00:00:00.002 Started by upstream project "autotest-per-patch" build number 132386
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.117 The recommended git tool is: git
00:00:00.117 using credential 00000000-0000-0000-0000-000000000002
00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.192 Fetching changes from the remote Git repository
00:00:00.195 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.270 Using shallow fetch with depth 1
00:00:00.270 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.270 > git --version # timeout=10
00:00:00.326 > git --version # 'git version 2.39.2'
00:00:00.326 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.359 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.359 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.955 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.968 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.984 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.984 > git config core.sparsecheckout # timeout=10
00:00:06.999 > git read-tree -mu HEAD # timeout=10
00:00:07.015 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.039 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.039 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.114 [Pipeline] Start of Pipeline
00:00:07.126 [Pipeline] library
00:00:07.127 Loading library shm_lib@master
00:00:07.128 Library shm_lib@master is cached. Copying from home.
00:00:07.145 [Pipeline] node
00:00:07.154 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.155 [Pipeline] {
00:00:07.162 [Pipeline] catchError
00:00:07.163 [Pipeline] {
00:00:07.171 [Pipeline] wrap
00:00:07.176 [Pipeline] {
00:00:07.181 [Pipeline] stage
00:00:07.183 [Pipeline] { (Prologue)
00:00:07.380 [Pipeline] sh
00:00:07.660 + logger -p user.info -t JENKINS-CI
00:00:07.684 [Pipeline] echo
00:00:07.686 Node: WFP6
00:00:07.693 [Pipeline] sh
00:00:07.984 [Pipeline] setCustomBuildProperty
00:00:07.995 [Pipeline] echo
00:00:07.997 Cleanup processes
00:00:08.004 [Pipeline] sh
00:00:08.289 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.289 4098329 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.302 [Pipeline] sh
00:00:08.586 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.587 ++ grep -v 'sudo pgrep'
00:00:08.587 ++ awk '{print $1}'
00:00:08.587 + sudo kill -9
00:00:08.587 + true
00:00:08.601 [Pipeline] cleanWs
00:00:08.611 [WS-CLEANUP] Deleting project workspace...
00:00:08.611 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.618 [WS-CLEANUP] done
00:00:08.624 [Pipeline] setCustomBuildProperty
00:00:08.639 [Pipeline] sh
00:00:08.926 + sudo git config --global --replace-all safe.directory '*'
00:00:09.027 [Pipeline] httpRequest
00:00:09.417 [Pipeline] echo
00:00:09.419 Sorcerer 10.211.164.20 is alive
00:00:09.430 [Pipeline] retry
00:00:09.433 [Pipeline] {
00:00:09.448 [Pipeline] httpRequest
00:00:09.453 HttpMethod: GET
00:00:09.453 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.454 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.462 Response Code: HTTP/1.1 200 OK
00:00:09.462 Success: Status code 200 is in the accepted range: 200,404
00:00:09.462 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.493 [Pipeline] }
00:00:21.511 [Pipeline] // retry
00:00:21.519 [Pipeline] sh
00:00:21.806 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.821 [Pipeline] httpRequest
00:00:22.483 [Pipeline] echo
00:00:22.485 Sorcerer 10.211.164.20 is alive
00:00:22.494 [Pipeline] retry
00:00:22.496 [Pipeline] {
00:00:22.510 [Pipeline] httpRequest
00:00:22.515 HttpMethod: GET
00:00:22.515 URL: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:00:22.516 Sending request to url: http://10.211.164.20/packages/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:00:22.522 Response Code: HTTP/1.1 200 OK
00:00:22.522 Success: Status code 200 is in the accepted range: 200,404
00:00:22.523 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:03:12.446 [Pipeline] }
00:03:12.462 [Pipeline] // retry
00:03:12.469 [Pipeline] sh
00:03:12.753 + tar --no-same-owner -xf spdk_92fb22519345bcb309a617ae4ad1cb7eebce6f14.tar.gz
00:03:15.298 [Pipeline] sh
00:03:15.584 + git -C spdk log --oneline -n5
00:03:15.584 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:03:15.584 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:03:15.584 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:03:15.584 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy()
00:03:15.584 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion
00:03:15.594 [Pipeline] }
00:03:15.609 [Pipeline] // stage
00:03:15.618 [Pipeline] stage
00:03:15.619 [Pipeline] { (Prepare)
00:03:15.637 [Pipeline] writeFile
00:03:15.652 [Pipeline] sh
00:03:15.936 + logger -p user.info -t JENKINS-CI
00:03:15.951 [Pipeline] sh
00:03:16.236 + logger -p user.info -t JENKINS-CI
00:03:16.248 [Pipeline] sh
00:03:16.532 + cat autorun-spdk.conf
00:03:16.533 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:16.533 SPDK_TEST_NVMF=1
00:03:16.533 SPDK_TEST_NVME_CLI=1
00:03:16.533 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:16.533 SPDK_TEST_NVMF_NICS=e810
00:03:16.533 SPDK_TEST_VFIOUSER=1
00:03:16.533 SPDK_RUN_UBSAN=1
00:03:16.533 NET_TYPE=phy
00:03:16.540 RUN_NIGHTLY=0
00:03:16.545 [Pipeline] readFile
00:03:16.571 [Pipeline] withEnv
00:03:16.573 [Pipeline] {
00:03:16.585 [Pipeline] sh
00:03:16.871 + set -ex
00:03:16.871 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:16.871 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:16.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:16.871 ++ SPDK_TEST_NVMF=1
00:03:16.871 ++ SPDK_TEST_NVME_CLI=1
00:03:16.871 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:16.871 ++ SPDK_TEST_NVMF_NICS=e810
00:03:16.871 ++ SPDK_TEST_VFIOUSER=1
00:03:16.871 ++ SPDK_RUN_UBSAN=1
00:03:16.871 ++ NET_TYPE=phy
00:03:16.871 ++ RUN_NIGHTLY=0
00:03:16.871 + case $SPDK_TEST_NVMF_NICS in
00:03:16.871 + DRIVERS=ice
00:03:16.871 + [[ tcp == \r\d\m\a ]]
00:03:16.871 + [[ -n ice ]]
00:03:16.871 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:16.871 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:16.871 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:16.871 rmmod: ERROR: Module irdma is not currently loaded
00:03:16.871 rmmod: ERROR: Module i40iw is not currently loaded
00:03:16.871 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:16.871 + true
00:03:16.871 + for D in $DRIVERS
00:03:16.871 + sudo modprobe ice
00:03:16.871 + exit 0
00:03:16.881 [Pipeline] }
00:03:16.896 [Pipeline] // withEnv
00:03:16.901 [Pipeline] }
00:03:16.914 [Pipeline] // stage
00:03:16.923 [Pipeline] catchError
00:03:16.924 [Pipeline] {
00:03:16.937 [Pipeline] timeout
00:03:16.937 Timeout set to expire in 1 hr 0 min
00:03:16.939 [Pipeline] {
00:03:16.953 [Pipeline] stage
00:03:16.955 [Pipeline] { (Tests)
00:03:16.968 [Pipeline] sh
00:03:17.253 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:17.253 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:17.253 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:17.253 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:17.253 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:17.253 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:17.253 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:17.253 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:17.253 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:17.253 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:17.253 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:17.253 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:17.253 + source /etc/os-release
00:03:17.253 ++ NAME='Fedora Linux'
00:03:17.253 ++ VERSION='39 (Cloud Edition)'
00:03:17.253 ++ ID=fedora
00:03:17.253 ++ VERSION_ID=39
00:03:17.253 ++ VERSION_CODENAME=
00:03:17.253 ++ PLATFORM_ID=platform:f39
00:03:17.253 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:17.253 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:17.253 ++ LOGO=fedora-logo-icon
00:03:17.253 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:17.253 ++ HOME_URL=https://fedoraproject.org/
00:03:17.253 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:17.253 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:17.253 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:17.253 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:17.253 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:17.253 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:17.253 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:17.253 ++ SUPPORT_END=2024-11-12
00:03:17.253 ++ VARIANT='Cloud Edition'
00:03:17.253 ++ VARIANT_ID=cloud
00:03:17.253 + uname -a
00:03:17.253 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:17.253 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:19.791 Hugepages
00:03:19.791 node hugesize free / total
00:03:19.791 node0 1048576kB 0 / 0
00:03:19.791 node0 2048kB 0 / 0
00:03:19.791 node1 1048576kB 0 / 0
00:03:19.791 node1 2048kB 0 / 0
00:03:19.791
00:03:19.791 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:19.791 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:19.791 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:19.791 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:19.791 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:19.791 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:19.791 + rm -f /tmp/spdk-ld-path
00:03:19.791 + source autorun-spdk.conf
00:03:19.791 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.791 ++ SPDK_TEST_NVMF=1
00:03:19.791 ++ SPDK_TEST_NVME_CLI=1
00:03:19.791 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:19.791 ++ SPDK_TEST_NVMF_NICS=e810
00:03:19.791 ++ SPDK_TEST_VFIOUSER=1
00:03:19.791 ++ SPDK_RUN_UBSAN=1
00:03:19.791 ++ NET_TYPE=phy
00:03:19.791 ++ RUN_NIGHTLY=0
00:03:19.791 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:19.791 + [[ -n '' ]]
00:03:19.791 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:19.791 + for M in /var/spdk/build-*-manifest.txt
00:03:19.791 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:19.791 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.791 + for M in /var/spdk/build-*-manifest.txt
00:03:19.791 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:19.791 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.791 + for M in /var/spdk/build-*-manifest.txt
00:03:19.791 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:19.791 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.791 ++ uname
00:03:19.791 + [[ Linux == \L\i\n\u\x ]]
00:03:19.791 + sudo dmesg -T
00:03:20.051 + sudo dmesg --clear
00:03:20.051 + dmesg_pid=4099779
00:03:20.051 + [[ Fedora Linux == FreeBSD ]]
00:03:20.051 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.051 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:20.051 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:20.051 + [[ -x /usr/src/fio-static/fio ]]
00:03:20.051 + export FIO_BIN=/usr/src/fio-static/fio
00:03:20.051 + FIO_BIN=/usr/src/fio-static/fio
00:03:20.051 + sudo dmesg -Tw
00:03:20.051 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:20.051 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:20.051 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:20.051 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.051 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:20.051 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:20.051 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.051 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:20.051 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:20.051 12:17:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:20.051 12:17:25 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:20.051 12:17:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:03:20.051 12:17:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:20.051 12:17:25 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:20.051 12:17:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:20.051 12:17:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:20.051 12:17:25 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:20.051 12:17:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:20.051 12:17:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:20.051 12:17:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:20.051 12:17:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.051 12:17:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.051 12:17:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.051 12:17:25 -- paths/export.sh@5 -- $ export PATH
00:03:20.051 12:17:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:20.051 12:17:25 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:20.051 12:17:25 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:20.051 12:17:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101445.XXXXXX
00:03:20.051 12:17:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101445.sxwzK9
00:03:20.051 12:17:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:20.051 12:17:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:20.051 12:17:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:20.051 12:17:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:20.051 12:17:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:20.051 12:17:25 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:20.051 12:17:25 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:20.051 12:17:25 -- common/autotest_common.sh@10 -- $ set +x
00:03:20.051 12:17:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:20.051 12:17:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:20.051 12:17:25 -- pm/common@17 -- $ local monitor
00:03:20.051 12:17:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.051 12:17:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.051 12:17:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.051 12:17:25 -- pm/common@21 -- $ date +%s
00:03:20.051 12:17:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.051 12:17:25 -- pm/common@21 -- $ date +%s
00:03:20.051 12:17:25 -- pm/common@25 -- $ sleep 1
00:03:20.051 12:17:25 -- pm/common@21 -- $ date +%s
00:03:20.051 12:17:25 -- pm/common@21 -- $ date +%s
00:03:20.051 12:17:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101445
00:03:20.051 12:17:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101445
00:03:20.051 12:17:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101445
00:03:20.051 12:17:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101445
00:03:20.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101445_collect-vmstat.pm.log
00:03:20.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101445_collect-cpu-load.pm.log
00:03:20.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101445_collect-cpu-temp.pm.log
00:03:20.311 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101445_collect-bmc-pm.bmc.pm.log
00:03:21.249 12:17:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:21.249 12:17:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:21.249 12:17:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:21.249 12:17:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:21.249 12:17:26 -- spdk/autobuild.sh@16 -- $ date -u
00:03:21.249 Wed Nov 20 11:17:26 AM UTC 2024
00:03:21.249 12:17:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:21.249 v25.01-pre-217-g92fb22519
00:03:21.249 12:17:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:21.249 12:17:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:21.249 12:17:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:21.249 12:17:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:21.249 12:17:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:21.249 12:17:26 -- common/autotest_common.sh@10 -- $ set +x
00:03:21.249 ************************************
00:03:21.249 START TEST ubsan
00:03:21.249 ************************************
00:03:21.249 12:17:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:21.249 using ubsan
00:03:21.249
00:03:21.249 real 0m0.000s
00:03:21.249 user 0m0.000s
00:03:21.249 sys 0m0.000s
00:03:21.249 12:17:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:21.249 12:17:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:21.249 ************************************
00:03:21.249 END TEST ubsan
00:03:21.249 ************************************
00:03:21.249 12:17:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:21.249 12:17:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:21.249 12:17:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:21.249 12:17:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:21.508 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:21.508 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:21.767 Using 'verbs' RDMA provider
00:03:34.927 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:47.213 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:47.213 Creating mk/config.mk...done.
00:03:47.213 Creating mk/cc.flags.mk...done.
00:03:47.213 Type 'make' to build.
00:03:47.213 12:17:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:03:47.213 12:17:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:47.213 12:17:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:47.213 12:17:52 -- common/autotest_common.sh@10 -- $ set +x
00:03:47.213 ************************************
00:03:47.213 START TEST make
00:03:47.213 ************************************
00:03:47.213 12:17:52 make -- common/autotest_common.sh@1129 -- $ make -j96
00:03:47.213 make[1]: Nothing to be done for 'all'.
00:03:48.599 The Meson build system
00:03:48.599 Version: 1.5.0
00:03:48.599 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:48.599 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:48.599 Build type: native build
00:03:48.599 Project name: libvfio-user
00:03:48.600 Project version: 0.0.1
00:03:48.600 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:48.600 C linker for the host machine: cc ld.bfd 2.40-14
00:03:48.600 Host machine cpu family: x86_64
00:03:48.600 Host machine cpu: x86_64
00:03:48.600 Run-time dependency threads found: YES
00:03:48.600 Library dl found: YES
00:03:48.600 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:48.600 Run-time dependency json-c found: YES 0.17
00:03:48.600 Run-time dependency cmocka found: YES 1.1.7
00:03:48.600 Program pytest-3 found: NO
00:03:48.600 Program flake8 found: NO
00:03:48.600 Program misspell-fixer found: NO
00:03:48.600 Program restructuredtext-lint found: NO
00:03:48.600 Program valgrind found: YES (/usr/bin/valgrind)
00:03:48.600 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:48.600 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:48.600 Compiler for C supports arguments -Wwrite-strings: YES
00:03:48.600 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:48.600 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:48.600 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:48.600 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:48.600 Build targets in project: 8
00:03:48.600 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:48.600 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:48.600
00:03:48.600 libvfio-user 0.0.1
00:03:48.600
00:03:48.600 User defined options
00:03:48.600 buildtype : debug
00:03:48.600 default_library: shared
00:03:48.600 libdir : /usr/local/lib
00:03:48.600
00:03:48.600 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:49.169 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:49.169 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:49.169 [2/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:49.169 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:49.169 [4/37] Compiling C object samples/null.p/null.c.o
00:03:49.169 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:49.169 [6/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:49.169 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:49.169 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:49.169 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:49.170 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:49.170 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:49.170 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:49.170 [13/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:49.170 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:49.170 [15/37] Compiling C object samples/server.p/server.c.o
00:03:49.428 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:49.428 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:49.428 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:49.428 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:49.428 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:49.428 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:49.428 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:49.428 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:49.428 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:49.428 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:49.428 [26/37] Compiling C object samples/client.p/client.c.o
00:03:49.428 [27/37] Linking target samples/client
00:03:49.428 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:49.428 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:49.428 [30/37] Linking target test/unit_tests
00:03:49.428 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:49.687 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:49.687 [33/37] Linking target samples/null
00:03:49.687 [34/37] Linking target samples/server
00:03:49.687 [35/37] Linking target samples/gpio-pci-idio-16
00:03:49.687 [36/37] Linking target samples/lspci
00:03:49.687 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:49.687 INFO: autodetecting backend as ninja
00:03:49.687 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:49.687 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:50.252 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:50.252 ninja: no work to do.
00:03:55.525 The Meson build system
00:03:55.525 Version: 1.5.0
00:03:55.525 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:55.525 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:55.525 Build type: native build
00:03:55.525 Program cat found: YES (/usr/bin/cat)
00:03:55.525 Project name: DPDK
00:03:55.525 Project version: 24.03.0
00:03:55.525 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:55.525 C linker for the host machine: cc ld.bfd 2.40-14
00:03:55.525 Host machine cpu family: x86_64
00:03:55.525 Host machine cpu: x86_64
00:03:55.525 Message: ## Building in Developer Mode ##
00:03:55.525 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:55.525 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:55.525 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:55.525 Program python3 found: YES (/usr/bin/python3)
00:03:55.525 Program cat found: YES (/usr/bin/cat)
00:03:55.525 Compiler for C supports arguments -march=native: YES
00:03:55.525 Checking for size of "void *" : 8
00:03:55.525 Checking for size of "void *" : 8 (cached)
00:03:55.525 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:55.525 Library m found: YES
00:03:55.525 Library numa found: YES
00:03:55.525 Has header "numaif.h" : YES
00:03:55.525 Library fdt found: NO
00:03:55.525 Library execinfo found: NO
00:03:55.525 Has header "execinfo.h" : YES
00:03:55.525 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:55.525 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:55.525 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:55.525 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:55.525 Run-time dependency openssl found: YES 3.1.1
00:03:55.525 Run-time dependency libpcap found: YES 1.10.4
00:03:55.525 Has header "pcap.h" with dependency libpcap: YES
00:03:55.525 Compiler for C supports arguments -Wcast-qual: YES
00:03:55.525 Compiler for C supports arguments -Wdeprecated: YES
00:03:55.525 Compiler for C supports arguments -Wformat: YES
00:03:55.525 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:55.525 Compiler for C supports arguments -Wformat-security: NO
00:03:55.525 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:55.525 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:55.525 Compiler for C supports arguments -Wnested-externs: YES
00:03:55.525 Compiler for C supports arguments -Wold-style-definition: YES
00:03:55.525 Compiler for C supports arguments -Wpointer-arith: YES
00:03:55.525 Compiler for C supports arguments -Wsign-compare: YES
00:03:55.526 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:55.526 Compiler for C supports arguments -Wundef: YES
00:03:55.526 Compiler for C supports arguments -Wwrite-strings: YES
00:03:55.526 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:55.526 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:55.526 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:55.526 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:55.526 Program objdump found: YES (/usr/bin/objdump)
00:03:55.526 Compiler for C supports arguments -mavx512f: YES
00:03:55.526 Checking if "AVX512 checking" compiles: YES
00:03:55.526 Fetching value of define "__SSE4_2__" : 1
00:03:55.526 Fetching value of define "__AES__" : 1
00:03:55.526 Fetching value of define "__AVX__" : 1
00:03:55.526 Fetching value of define "__AVX2__" : 1
00:03:55.526 Fetching value of define "__AVX512BW__" : 1
00:03:55.526 Fetching value of define "__AVX512CD__" : 1
00:03:55.526 Fetching value of define "__AVX512DQ__" : 1
00:03:55.526 Fetching value of define "__AVX512F__" : 1
00:03:55.526 Fetching value of define "__AVX512VL__" : 1 00:03:55.526 Fetching value of define "__PCLMUL__" : 1 00:03:55.526 Fetching value of define "__RDRND__" : 1 00:03:55.526 Fetching value of define "__RDSEED__" : 1 00:03:55.526 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:55.526 Fetching value of define "__znver1__" : (undefined) 00:03:55.526 Fetching value of define "__znver2__" : (undefined) 00:03:55.526 Fetching value of define "__znver3__" : (undefined) 00:03:55.526 Fetching value of define "__znver4__" : (undefined) 00:03:55.526 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:55.526 Message: lib/log: Defining dependency "log" 00:03:55.526 Message: lib/kvargs: Defining dependency "kvargs" 00:03:55.526 Message: lib/telemetry: Defining dependency "telemetry" 00:03:55.526 Checking for function "getentropy" : NO 00:03:55.526 Message: lib/eal: Defining dependency "eal" 00:03:55.526 Message: lib/ring: Defining dependency "ring" 00:03:55.526 Message: lib/rcu: Defining dependency "rcu" 00:03:55.526 Message: lib/mempool: Defining dependency "mempool" 00:03:55.526 Message: lib/mbuf: Defining dependency "mbuf" 00:03:55.526 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:55.526 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:55.526 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:55.526 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:55.526 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:55.526 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:55.526 Compiler for C supports arguments -mpclmul: YES 00:03:55.526 Compiler for C supports arguments -maes: YES 00:03:55.526 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:55.526 Compiler for C supports arguments -mavx512bw: YES 00:03:55.526 Compiler for C supports arguments -mavx512dq: YES 00:03:55.526 Compiler for C supports arguments -mavx512vl: YES 00:03:55.526 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:03:55.526 Compiler for C supports arguments -mavx2: YES 00:03:55.526 Compiler for C supports arguments -mavx: YES 00:03:55.526 Message: lib/net: Defining dependency "net" 00:03:55.526 Message: lib/meter: Defining dependency "meter" 00:03:55.526 Message: lib/ethdev: Defining dependency "ethdev" 00:03:55.526 Message: lib/pci: Defining dependency "pci" 00:03:55.526 Message: lib/cmdline: Defining dependency "cmdline" 00:03:55.526 Message: lib/hash: Defining dependency "hash" 00:03:55.526 Message: lib/timer: Defining dependency "timer" 00:03:55.526 Message: lib/compressdev: Defining dependency "compressdev" 00:03:55.526 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:55.526 Message: lib/dmadev: Defining dependency "dmadev" 00:03:55.526 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:55.526 Message: lib/power: Defining dependency "power" 00:03:55.526 Message: lib/reorder: Defining dependency "reorder" 00:03:55.526 Message: lib/security: Defining dependency "security" 00:03:55.526 Has header "linux/userfaultfd.h" : YES 00:03:55.526 Has header "linux/vduse.h" : YES 00:03:55.526 Message: lib/vhost: Defining dependency "vhost" 00:03:55.526 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:55.526 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:55.526 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:55.526 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:55.526 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:55.526 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:55.526 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:55.526 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:55.526 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:55.526 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:55.526 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:55.526 Configuring doxy-api-html.conf using configuration 00:03:55.526 Configuring doxy-api-man.conf using configuration 00:03:55.526 Program mandb found: YES (/usr/bin/mandb) 00:03:55.526 Program sphinx-build found: NO 00:03:55.526 Configuring rte_build_config.h using configuration 00:03:55.526 Message: 00:03:55.526 ================= 00:03:55.526 Applications Enabled 00:03:55.526 ================= 00:03:55.526 00:03:55.526 apps: 00:03:55.526 00:03:55.526 00:03:55.526 Message: 00:03:55.526 ================= 00:03:55.526 Libraries Enabled 00:03:55.526 ================= 00:03:55.526 00:03:55.526 libs: 00:03:55.526 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:55.526 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:55.526 cryptodev, dmadev, power, reorder, security, vhost, 00:03:55.526 00:03:55.526 Message: 00:03:55.526 =============== 00:03:55.526 Drivers Enabled 00:03:55.526 =============== 00:03:55.526 00:03:55.526 common: 00:03:55.526 00:03:55.526 bus: 00:03:55.526 pci, vdev, 00:03:55.526 mempool: 00:03:55.526 ring, 00:03:55.526 dma: 00:03:55.526 00:03:55.526 net: 00:03:55.526 00:03:55.526 crypto: 00:03:55.526 00:03:55.526 compress: 00:03:55.526 00:03:55.526 vdpa: 00:03:55.526 00:03:55.526 00:03:55.526 Message: 00:03:55.526 ================= 00:03:55.526 Content Skipped 00:03:55.526 ================= 00:03:55.526 00:03:55.526 apps: 00:03:55.526 dumpcap: explicitly disabled via build config 00:03:55.526 graph: explicitly disabled via build config 00:03:55.526 pdump: explicitly disabled via build config 00:03:55.526 proc-info: explicitly disabled via build config 00:03:55.526 test-acl: explicitly disabled via build config 00:03:55.526 test-bbdev: explicitly disabled via build config 00:03:55.526 test-cmdline: explicitly disabled via build config 00:03:55.526 test-compress-perf: explicitly disabled via build config 00:03:55.526 test-crypto-perf: explicitly disabled 
via build config 00:03:55.526 test-dma-perf: explicitly disabled via build config 00:03:55.526 test-eventdev: explicitly disabled via build config 00:03:55.526 test-fib: explicitly disabled via build config 00:03:55.526 test-flow-perf: explicitly disabled via build config 00:03:55.526 test-gpudev: explicitly disabled via build config 00:03:55.526 test-mldev: explicitly disabled via build config 00:03:55.526 test-pipeline: explicitly disabled via build config 00:03:55.526 test-pmd: explicitly disabled via build config 00:03:55.526 test-regex: explicitly disabled via build config 00:03:55.526 test-sad: explicitly disabled via build config 00:03:55.526 test-security-perf: explicitly disabled via build config 00:03:55.526 00:03:55.526 libs: 00:03:55.526 argparse: explicitly disabled via build config 00:03:55.526 metrics: explicitly disabled via build config 00:03:55.526 acl: explicitly disabled via build config 00:03:55.526 bbdev: explicitly disabled via build config 00:03:55.526 bitratestats: explicitly disabled via build config 00:03:55.526 bpf: explicitly disabled via build config 00:03:55.526 cfgfile: explicitly disabled via build config 00:03:55.526 distributor: explicitly disabled via build config 00:03:55.526 efd: explicitly disabled via build config 00:03:55.526 eventdev: explicitly disabled via build config 00:03:55.526 dispatcher: explicitly disabled via build config 00:03:55.526 gpudev: explicitly disabled via build config 00:03:55.526 gro: explicitly disabled via build config 00:03:55.526 gso: explicitly disabled via build config 00:03:55.526 ip_frag: explicitly disabled via build config 00:03:55.526 jobstats: explicitly disabled via build config 00:03:55.526 latencystats: explicitly disabled via build config 00:03:55.526 lpm: explicitly disabled via build config 00:03:55.526 member: explicitly disabled via build config 00:03:55.526 pcapng: explicitly disabled via build config 00:03:55.526 rawdev: explicitly disabled via build config 00:03:55.526 regexdev: 
explicitly disabled via build config 00:03:55.526 mldev: explicitly disabled via build config 00:03:55.526 rib: explicitly disabled via build config 00:03:55.526 sched: explicitly disabled via build config 00:03:55.526 stack: explicitly disabled via build config 00:03:55.526 ipsec: explicitly disabled via build config 00:03:55.526 pdcp: explicitly disabled via build config 00:03:55.526 fib: explicitly disabled via build config 00:03:55.526 port: explicitly disabled via build config 00:03:55.526 pdump: explicitly disabled via build config 00:03:55.526 table: explicitly disabled via build config 00:03:55.526 pipeline: explicitly disabled via build config 00:03:55.526 graph: explicitly disabled via build config 00:03:55.526 node: explicitly disabled via build config 00:03:55.526 00:03:55.526 drivers: 00:03:55.526 common/cpt: not in enabled drivers build config 00:03:55.526 common/dpaax: not in enabled drivers build config 00:03:55.526 common/iavf: not in enabled drivers build config 00:03:55.526 common/idpf: not in enabled drivers build config 00:03:55.526 common/ionic: not in enabled drivers build config 00:03:55.526 common/mvep: not in enabled drivers build config 00:03:55.526 common/octeontx: not in enabled drivers build config 00:03:55.526 bus/auxiliary: not in enabled drivers build config 00:03:55.526 bus/cdx: not in enabled drivers build config 00:03:55.526 bus/dpaa: not in enabled drivers build config 00:03:55.526 bus/fslmc: not in enabled drivers build config 00:03:55.526 bus/ifpga: not in enabled drivers build config 00:03:55.526 bus/platform: not in enabled drivers build config 00:03:55.527 bus/uacce: not in enabled drivers build config 00:03:55.527 bus/vmbus: not in enabled drivers build config 00:03:55.527 common/cnxk: not in enabled drivers build config 00:03:55.527 common/mlx5: not in enabled drivers build config 00:03:55.527 common/nfp: not in enabled drivers build config 00:03:55.527 common/nitrox: not in enabled drivers build config 00:03:55.527 
common/qat: not in enabled drivers build config 00:03:55.527 common/sfc_efx: not in enabled drivers build config 00:03:55.527 mempool/bucket: not in enabled drivers build config 00:03:55.527 mempool/cnxk: not in enabled drivers build config 00:03:55.527 mempool/dpaa: not in enabled drivers build config 00:03:55.527 mempool/dpaa2: not in enabled drivers build config 00:03:55.527 mempool/octeontx: not in enabled drivers build config 00:03:55.527 mempool/stack: not in enabled drivers build config 00:03:55.527 dma/cnxk: not in enabled drivers build config 00:03:55.527 dma/dpaa: not in enabled drivers build config 00:03:55.527 dma/dpaa2: not in enabled drivers build config 00:03:55.527 dma/hisilicon: not in enabled drivers build config 00:03:55.527 dma/idxd: not in enabled drivers build config 00:03:55.527 dma/ioat: not in enabled drivers build config 00:03:55.527 dma/skeleton: not in enabled drivers build config 00:03:55.527 net/af_packet: not in enabled drivers build config 00:03:55.527 net/af_xdp: not in enabled drivers build config 00:03:55.527 net/ark: not in enabled drivers build config 00:03:55.527 net/atlantic: not in enabled drivers build config 00:03:55.527 net/avp: not in enabled drivers build config 00:03:55.527 net/axgbe: not in enabled drivers build config 00:03:55.527 net/bnx2x: not in enabled drivers build config 00:03:55.527 net/bnxt: not in enabled drivers build config 00:03:55.527 net/bonding: not in enabled drivers build config 00:03:55.527 net/cnxk: not in enabled drivers build config 00:03:55.527 net/cpfl: not in enabled drivers build config 00:03:55.527 net/cxgbe: not in enabled drivers build config 00:03:55.527 net/dpaa: not in enabled drivers build config 00:03:55.527 net/dpaa2: not in enabled drivers build config 00:03:55.527 net/e1000: not in enabled drivers build config 00:03:55.527 net/ena: not in enabled drivers build config 00:03:55.527 net/enetc: not in enabled drivers build config 00:03:55.527 net/enetfec: not in enabled drivers build 
config 00:03:55.527 net/enic: not in enabled drivers build config 00:03:55.527 net/failsafe: not in enabled drivers build config 00:03:55.527 net/fm10k: not in enabled drivers build config 00:03:55.527 net/gve: not in enabled drivers build config 00:03:55.527 net/hinic: not in enabled drivers build config 00:03:55.527 net/hns3: not in enabled drivers build config 00:03:55.527 net/i40e: not in enabled drivers build config 00:03:55.527 net/iavf: not in enabled drivers build config 00:03:55.527 net/ice: not in enabled drivers build config 00:03:55.527 net/idpf: not in enabled drivers build config 00:03:55.527 net/igc: not in enabled drivers build config 00:03:55.527 net/ionic: not in enabled drivers build config 00:03:55.527 net/ipn3ke: not in enabled drivers build config 00:03:55.527 net/ixgbe: not in enabled drivers build config 00:03:55.527 net/mana: not in enabled drivers build config 00:03:55.527 net/memif: not in enabled drivers build config 00:03:55.527 net/mlx4: not in enabled drivers build config 00:03:55.527 net/mlx5: not in enabled drivers build config 00:03:55.527 net/mvneta: not in enabled drivers build config 00:03:55.527 net/mvpp2: not in enabled drivers build config 00:03:55.527 net/netvsc: not in enabled drivers build config 00:03:55.527 net/nfb: not in enabled drivers build config 00:03:55.527 net/nfp: not in enabled drivers build config 00:03:55.527 net/ngbe: not in enabled drivers build config 00:03:55.527 net/null: not in enabled drivers build config 00:03:55.527 net/octeontx: not in enabled drivers build config 00:03:55.527 net/octeon_ep: not in enabled drivers build config 00:03:55.527 net/pcap: not in enabled drivers build config 00:03:55.527 net/pfe: not in enabled drivers build config 00:03:55.527 net/qede: not in enabled drivers build config 00:03:55.527 net/ring: not in enabled drivers build config 00:03:55.527 net/sfc: not in enabled drivers build config 00:03:55.527 net/softnic: not in enabled drivers build config 00:03:55.527 net/tap: 
not in enabled drivers build config 00:03:55.527 net/thunderx: not in enabled drivers build config 00:03:55.527 net/txgbe: not in enabled drivers build config 00:03:55.527 net/vdev_netvsc: not in enabled drivers build config 00:03:55.527 net/vhost: not in enabled drivers build config 00:03:55.527 net/virtio: not in enabled drivers build config 00:03:55.527 net/vmxnet3: not in enabled drivers build config 00:03:55.527 raw/*: missing internal dependency, "rawdev" 00:03:55.527 crypto/armv8: not in enabled drivers build config 00:03:55.527 crypto/bcmfs: not in enabled drivers build config 00:03:55.527 crypto/caam_jr: not in enabled drivers build config 00:03:55.527 crypto/ccp: not in enabled drivers build config 00:03:55.527 crypto/cnxk: not in enabled drivers build config 00:03:55.527 crypto/dpaa_sec: not in enabled drivers build config 00:03:55.527 crypto/dpaa2_sec: not in enabled drivers build config 00:03:55.527 crypto/ipsec_mb: not in enabled drivers build config 00:03:55.527 crypto/mlx5: not in enabled drivers build config 00:03:55.527 crypto/mvsam: not in enabled drivers build config 00:03:55.527 crypto/nitrox: not in enabled drivers build config 00:03:55.527 crypto/null: not in enabled drivers build config 00:03:55.527 crypto/octeontx: not in enabled drivers build config 00:03:55.527 crypto/openssl: not in enabled drivers build config 00:03:55.527 crypto/scheduler: not in enabled drivers build config 00:03:55.527 crypto/uadk: not in enabled drivers build config 00:03:55.527 crypto/virtio: not in enabled drivers build config 00:03:55.527 compress/isal: not in enabled drivers build config 00:03:55.527 compress/mlx5: not in enabled drivers build config 00:03:55.527 compress/nitrox: not in enabled drivers build config 00:03:55.527 compress/octeontx: not in enabled drivers build config 00:03:55.527 compress/zlib: not in enabled drivers build config 00:03:55.527 regex/*: missing internal dependency, "regexdev" 00:03:55.527 ml/*: missing internal dependency, "mldev" 
00:03:55.527 vdpa/ifc: not in enabled drivers build config 00:03:55.527 vdpa/mlx5: not in enabled drivers build config 00:03:55.527 vdpa/nfp: not in enabled drivers build config 00:03:55.527 vdpa/sfc: not in enabled drivers build config 00:03:55.527 event/*: missing internal dependency, "eventdev" 00:03:55.527 baseband/*: missing internal dependency, "bbdev" 00:03:55.527 gpu/*: missing internal dependency, "gpudev" 00:03:55.527 00:03:55.527 00:03:55.527 Build targets in project: 85 00:03:55.527 00:03:55.527 DPDK 24.03.0 00:03:55.527 00:03:55.527 User defined options 00:03:55.527 buildtype : debug 00:03:55.527 default_library : shared 00:03:55.527 libdir : lib 00:03:55.527 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:55.527 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:55.527 c_link_args : 00:03:55.527 cpu_instruction_set: native 00:03:55.527 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:55.527 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:55.527 enable_docs : false 00:03:55.527 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:55.527 enable_kmods : false 00:03:55.527 max_lcores : 128 00:03:55.527 tests : false 00:03:55.527 00:03:55.527 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:55.793 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:56.057 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:56.057 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:56.057 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:56.057 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:56.057 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:56.057 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:56.057 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:56.057 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:56.057 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:56.057 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:56.057 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:56.057 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:56.057 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:56.057 [14/268] Linking static target lib/librte_kvargs.a 00:03:56.057 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:56.057 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:56.057 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:56.057 [18/268] Linking static target lib/librte_log.a 00:03:56.057 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:56.317 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:56.317 [21/268] Linking static target lib/librte_pci.a 00:03:56.317 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:56.317 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:56.317 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:56.317 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:56.575 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:56.575 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:56.575 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:56.575 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:56.575 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:56.575 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:56.575 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:56.575 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:56.575 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:56.575 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:56.575 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:56.575 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:56.575 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:56.575 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:56.575 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:56.575 [41/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:56.575 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:56.575 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:56.575 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:56.575 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:56.575 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:56.575 
[47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:56.575 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:56.575 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:56.575 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:56.576 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:56.576 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:56.576 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:56.576 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:56.576 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:56.576 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:56.576 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:56.576 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:56.576 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:56.576 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:56.576 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:56.576 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:56.576 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:56.576 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:56.576 [65/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:56.576 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:56.576 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:56.576 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:56.576 [69/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:56.576 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:56.576 [71/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:56.576 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:56.576 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:56.576 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:56.576 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:56.576 [76/268] Linking static target lib/librte_meter.a 00:03:56.576 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:56.576 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:56.576 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:56.576 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:56.576 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:56.576 [82/268] Linking static target lib/librte_telemetry.a 00:03:56.576 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:56.576 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:56.576 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:56.576 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:56.576 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:56.576 [88/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:56.576 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:56.576 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:56.576 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:56.576 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:56.576 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:56.576 [94/268] Linking static target lib/librte_ring.a 00:03:56.576 [95/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.576 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:56.576 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:56.576 [98/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.576 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:56.576 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:56.576 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:56.576 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:56.576 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:56.835 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:56.835 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:56.835 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:56.835 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:56.835 [108/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:56.835 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:56.835 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:56.835 [111/268] Linking static target lib/librte_rcu.a 00:03:56.835 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:56.835 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:56.835 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:56.835 
[115/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:56.835 [116/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:56.835 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:56.835 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:56.835 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:56.835 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:56.835 [121/268] Linking static target lib/librte_net.a 00:03:56.835 [122/268] Linking static target lib/librte_mempool.a 00:03:56.835 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:56.835 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:56.835 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:56.835 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:56.835 [127/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:56.835 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:56.835 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:56.835 [130/268] Linking static target lib/librte_eal.a 00:03:56.835 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:56.835 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:56.835 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.835 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:56.835 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.835 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.835 [137/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:56.835 [138/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:56.835 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:57.093 [140/268] Linking static target lib/librte_timer.a 00:03:57.093 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:57.093 [142/268] Linking static target lib/librte_cmdline.a 00:03:57.093 [143/268] Linking target lib/librte_log.so.24.1 00:03:57.093 [144/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:57.093 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:57.093 [146/268] Linking static target lib/librte_mbuf.a 00:03:57.093 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:57.093 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:57.093 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:57.093 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.093 [151/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:57.093 [152/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.093 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:57.093 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:57.093 [155/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:57.093 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:57.093 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:57.093 [158/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.093 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:57.093 [160/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:57.093 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:57.093 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:57.093 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:57.093 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:57.093 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:57.093 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:57.093 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:57.093 [168/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:57.093 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:57.093 [170/268] Linking target lib/librte_kvargs.so.24.1 00:03:57.093 [171/268] Linking target lib/librte_telemetry.so.24.1 00:03:57.093 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:57.093 [173/268] Linking static target lib/librte_reorder.a 00:03:57.093 [174/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:57.093 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:57.093 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:57.093 [177/268] Linking static target lib/librte_dmadev.a 00:03:57.093 [178/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:57.093 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:57.093 [180/268] Linking static target lib/librte_compressdev.a 00:03:57.093 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:57.352 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:57.352 [183/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:57.352 [184/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:57.352 [185/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:57.352 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:57.352 [187/268] Linking static target lib/librte_security.a 00:03:57.352 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:57.352 [189/268] Linking static target lib/librte_power.a 00:03:57.352 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:57.352 [191/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.352 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:57.352 [193/268] Linking static target drivers/librte_bus_vdev.a 00:03:57.352 [194/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:57.352 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:57.352 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:57.352 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:57.352 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:57.352 [199/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:57.352 [200/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.352 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:57.352 [202/268] Linking static target drivers/librte_mempool_ring.a 00:03:57.352 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.352 [204/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:57.352 [205/268] Linking static target lib/librte_hash.a 00:03:57.352 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:57.352 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:57.352 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:57.610 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.610 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:57.610 [211/268] Linking static target drivers/librte_bus_pci.a 00:03:57.610 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.610 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.610 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.610 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:57.610 [216/268] Linking static target lib/librte_cryptodev.a 00:03:57.610 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.868 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.868 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:57.868 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.868 [221/268] Linking static target lib/librte_ethdev.a 00:03:57.868 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.126 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:58.126 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.126 [225/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.384 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.384 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.320 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:59.320 [229/268] Linking static target lib/librte_vhost.a 00:03:59.578 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.482 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.752 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.010 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.010 [234/268] Linking target lib/librte_eal.so.24.1 00:04:07.010 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:07.269 [236/268] Linking target lib/librte_pci.so.24.1 00:04:07.269 [237/268] Linking target lib/librte_meter.so.24.1 00:04:07.269 [238/268] Linking target lib/librte_timer.so.24.1 00:04:07.269 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:07.269 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:07.269 [241/268] Linking target lib/librte_ring.so.24.1 00:04:07.269 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:07.269 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:07.269 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:07.269 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:07.269 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:07.269 [247/268] Linking target 
lib/librte_mempool.so.24.1 00:04:07.269 [248/268] Linking target lib/librte_rcu.so.24.1 00:04:07.269 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:07.527 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:07.527 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:07.528 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:07.528 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:07.528 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:07.785 [255/268] Linking target lib/librte_net.so.24.1 00:04:07.786 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:07.786 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:07.786 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:07.786 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:07.786 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:07.786 [261/268] Linking target lib/librte_hash.so.24.1 00:04:07.786 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:07.786 [263/268] Linking target lib/librte_security.so.24.1 00:04:07.786 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:08.044 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:08.044 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:08.044 [267/268] Linking target lib/librte_power.so.24.1 00:04:08.044 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:08.044 INFO: autodetecting backend as ninja 00:04:08.044 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:18.017 CC lib/log/log.o 00:04:18.017 CC lib/log/log_flags.o 00:04:18.017 CC lib/log/log_deprecated.o 00:04:18.017 CC lib/ut/ut.o 
00:04:18.017 CC lib/ut_mock/mock.o 00:04:18.276 LIB libspdk_ut.a 00:04:18.276 LIB libspdk_ut_mock.a 00:04:18.276 LIB libspdk_log.a 00:04:18.276 SO libspdk_ut.so.2.0 00:04:18.276 SO libspdk_ut_mock.so.6.0 00:04:18.276 SO libspdk_log.so.7.1 00:04:18.276 SYMLINK libspdk_ut.so 00:04:18.276 SYMLINK libspdk_log.so 00:04:18.276 SYMLINK libspdk_ut_mock.so 00:04:18.843 CC lib/util/base64.o 00:04:18.843 CC lib/util/cpuset.o 00:04:18.843 CC lib/util/bit_array.o 00:04:18.843 CC lib/util/crc16.o 00:04:18.843 CC lib/ioat/ioat.o 00:04:18.843 CC lib/util/crc32.o 00:04:18.843 CC lib/dma/dma.o 00:04:18.843 CC lib/util/crc32c.o 00:04:18.843 CC lib/util/crc32_ieee.o 00:04:18.843 CC lib/util/crc64.o 00:04:18.843 CC lib/util/dif.o 00:04:18.843 CC lib/util/fd.o 00:04:18.843 CC lib/util/fd_group.o 00:04:18.843 CXX lib/trace_parser/trace.o 00:04:18.843 CC lib/util/file.o 00:04:18.843 CC lib/util/hexlify.o 00:04:18.843 CC lib/util/iov.o 00:04:18.843 CC lib/util/math.o 00:04:18.843 CC lib/util/net.o 00:04:18.843 CC lib/util/pipe.o 00:04:18.843 CC lib/util/strerror_tls.o 00:04:18.843 CC lib/util/string.o 00:04:18.843 CC lib/util/uuid.o 00:04:18.843 CC lib/util/xor.o 00:04:18.843 CC lib/util/zipf.o 00:04:18.843 CC lib/util/md5.o 00:04:18.843 CC lib/vfio_user/host/vfio_user.o 00:04:18.843 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.843 LIB libspdk_dma.a 00:04:18.843 SO libspdk_dma.so.5.0 00:04:18.843 LIB libspdk_ioat.a 00:04:19.102 SO libspdk_ioat.so.7.0 00:04:19.102 SYMLINK libspdk_dma.so 00:04:19.102 SYMLINK libspdk_ioat.so 00:04:19.102 LIB libspdk_vfio_user.a 00:04:19.102 SO libspdk_vfio_user.so.5.0 00:04:19.102 SYMLINK libspdk_vfio_user.so 00:04:19.102 LIB libspdk_util.a 00:04:19.360 SO libspdk_util.so.10.1 00:04:19.360 SYMLINK libspdk_util.so 00:04:19.360 LIB libspdk_trace_parser.a 00:04:19.360 SO libspdk_trace_parser.so.6.0 00:04:19.618 SYMLINK libspdk_trace_parser.so 00:04:19.618 CC lib/vmd/vmd.o 00:04:19.618 CC lib/rdma_utils/rdma_utils.o 00:04:19.618 CC lib/conf/conf.o 00:04:19.618 
CC lib/vmd/led.o 00:04:19.618 CC lib/env_dpdk/env.o 00:04:19.618 CC lib/env_dpdk/memory.o 00:04:19.618 CC lib/env_dpdk/pci.o 00:04:19.618 CC lib/env_dpdk/threads.o 00:04:19.618 CC lib/env_dpdk/init.o 00:04:19.618 CC lib/json/json_parse.o 00:04:19.618 CC lib/env_dpdk/pci_ioat.o 00:04:19.618 CC lib/json/json_util.o 00:04:19.618 CC lib/idxd/idxd.o 00:04:19.618 CC lib/env_dpdk/pci_virtio.o 00:04:19.618 CC lib/env_dpdk/pci_vmd.o 00:04:19.618 CC lib/json/json_write.o 00:04:19.618 CC lib/idxd/idxd_user.o 00:04:19.618 CC lib/env_dpdk/pci_idxd.o 00:04:19.618 CC lib/idxd/idxd_kernel.o 00:04:19.618 CC lib/env_dpdk/pci_event.o 00:04:19.618 CC lib/env_dpdk/sigbus_handler.o 00:04:19.618 CC lib/env_dpdk/pci_dpdk.o 00:04:19.618 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.618 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.877 LIB libspdk_conf.a 00:04:19.877 LIB libspdk_rdma_utils.a 00:04:19.877 SO libspdk_conf.so.6.0 00:04:19.877 SO libspdk_rdma_utils.so.1.0 00:04:20.135 LIB libspdk_json.a 00:04:20.135 SYMLINK libspdk_conf.so 00:04:20.135 SYMLINK libspdk_rdma_utils.so 00:04:20.135 SO libspdk_json.so.6.0 00:04:20.135 SYMLINK libspdk_json.so 00:04:20.135 LIB libspdk_vmd.a 00:04:20.135 LIB libspdk_idxd.a 00:04:20.135 SO libspdk_vmd.so.6.0 00:04:20.135 SO libspdk_idxd.so.12.1 00:04:20.394 SYMLINK libspdk_vmd.so 00:04:20.394 SYMLINK libspdk_idxd.so 00:04:20.394 CC lib/rdma_provider/common.o 00:04:20.394 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:20.394 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.394 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.394 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.394 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.394 LIB libspdk_rdma_provider.a 00:04:20.653 SO libspdk_rdma_provider.so.7.0 00:04:20.653 SYMLINK libspdk_rdma_provider.so 00:04:20.653 LIB libspdk_jsonrpc.a 00:04:20.653 SO libspdk_jsonrpc.so.6.0 00:04:20.653 SYMLINK libspdk_jsonrpc.so 00:04:20.653 LIB libspdk_env_dpdk.a 00:04:20.913 SO libspdk_env_dpdk.so.15.1 00:04:20.913 SYMLINK libspdk_env_dpdk.so 
00:04:21.172 CC lib/rpc/rpc.o 00:04:21.172 LIB libspdk_rpc.a 00:04:21.172 SO libspdk_rpc.so.6.0 00:04:21.431 SYMLINK libspdk_rpc.so 00:04:21.689 CC lib/notify/notify.o 00:04:21.689 CC lib/notify/notify_rpc.o 00:04:21.689 CC lib/keyring/keyring.o 00:04:21.689 CC lib/keyring/keyring_rpc.o 00:04:21.689 CC lib/trace/trace.o 00:04:21.689 CC lib/trace/trace_flags.o 00:04:21.689 CC lib/trace/trace_rpc.o 00:04:21.689 LIB libspdk_notify.a 00:04:21.689 SO libspdk_notify.so.6.0 00:04:21.950 LIB libspdk_keyring.a 00:04:21.950 LIB libspdk_trace.a 00:04:21.950 SYMLINK libspdk_notify.so 00:04:21.950 SO libspdk_keyring.so.2.0 00:04:21.950 SO libspdk_trace.so.11.0 00:04:21.950 SYMLINK libspdk_keyring.so 00:04:21.950 SYMLINK libspdk_trace.so 00:04:22.209 CC lib/sock/sock.o 00:04:22.209 CC lib/sock/sock_rpc.o 00:04:22.209 CC lib/thread/thread.o 00:04:22.209 CC lib/thread/iobuf.o 00:04:22.468 LIB libspdk_sock.a 00:04:22.727 SO libspdk_sock.so.10.0 00:04:22.727 SYMLINK libspdk_sock.so 00:04:22.986 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.986 CC lib/nvme/nvme_ctrlr.o 00:04:22.986 CC lib/nvme/nvme_fabric.o 00:04:22.986 CC lib/nvme/nvme_ns_cmd.o 00:04:22.986 CC lib/nvme/nvme_ns.o 00:04:22.986 CC lib/nvme/nvme_pcie_common.o 00:04:22.986 CC lib/nvme/nvme_pcie.o 00:04:22.986 CC lib/nvme/nvme_qpair.o 00:04:22.986 CC lib/nvme/nvme.o 00:04:22.986 CC lib/nvme/nvme_quirks.o 00:04:22.986 CC lib/nvme/nvme_transport.o 00:04:22.986 CC lib/nvme/nvme_discovery.o 00:04:22.986 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.986 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.986 CC lib/nvme/nvme_tcp.o 00:04:22.986 CC lib/nvme/nvme_opal.o 00:04:22.986 CC lib/nvme/nvme_io_msg.o 00:04:22.986 CC lib/nvme/nvme_poll_group.o 00:04:22.986 CC lib/nvme/nvme_zns.o 00:04:22.986 CC lib/nvme/nvme_stubs.o 00:04:22.986 CC lib/nvme/nvme_auth.o 00:04:22.986 CC lib/nvme/nvme_cuse.o 00:04:22.986 CC lib/nvme/nvme_vfio_user.o 00:04:22.986 CC lib/nvme/nvme_rdma.o 00:04:23.245 LIB libspdk_thread.a 00:04:23.245 SO libspdk_thread.so.11.0 
00:04:23.503 SYMLINK libspdk_thread.so 00:04:23.761 CC lib/accel/accel.o 00:04:23.761 CC lib/accel/accel_rpc.o 00:04:23.761 CC lib/accel/accel_sw.o 00:04:23.761 CC lib/vfu_tgt/tgt_endpoint.o 00:04:23.761 CC lib/vfu_tgt/tgt_rpc.o 00:04:23.761 CC lib/init/json_config.o 00:04:23.761 CC lib/init/subsystem.o 00:04:23.761 CC lib/init/subsystem_rpc.o 00:04:23.761 CC lib/init/rpc.o 00:04:23.761 CC lib/fsdev/fsdev.o 00:04:23.761 CC lib/fsdev/fsdev_io.o 00:04:23.761 CC lib/fsdev/fsdev_rpc.o 00:04:23.761 CC lib/virtio/virtio.o 00:04:23.761 CC lib/virtio/virtio_vhost_user.o 00:04:23.761 CC lib/virtio/virtio_vfio_user.o 00:04:23.761 CC lib/virtio/virtio_pci.o 00:04:23.761 CC lib/blob/blobstore.o 00:04:23.761 CC lib/blob/request.o 00:04:23.761 CC lib/blob/zeroes.o 00:04:23.761 CC lib/blob/blob_bs_dev.o 00:04:24.019 LIB libspdk_init.a 00:04:24.019 SO libspdk_init.so.6.0 00:04:24.019 LIB libspdk_vfu_tgt.a 00:04:24.019 LIB libspdk_virtio.a 00:04:24.019 SO libspdk_vfu_tgt.so.3.0 00:04:24.019 SYMLINK libspdk_init.so 00:04:24.019 SO libspdk_virtio.so.7.0 00:04:24.019 SYMLINK libspdk_vfu_tgt.so 00:04:24.019 SYMLINK libspdk_virtio.so 00:04:24.277 LIB libspdk_fsdev.a 00:04:24.277 SO libspdk_fsdev.so.2.0 00:04:24.277 SYMLINK libspdk_fsdev.so 00:04:24.277 CC lib/event/app.o 00:04:24.277 CC lib/event/reactor.o 00:04:24.277 CC lib/event/log_rpc.o 00:04:24.277 CC lib/event/app_rpc.o 00:04:24.277 CC lib/event/scheduler_static.o 00:04:24.535 LIB libspdk_accel.a 00:04:24.535 SO libspdk_accel.so.16.0 00:04:24.535 SYMLINK libspdk_accel.so 00:04:24.535 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.794 LIB libspdk_event.a 00:04:24.794 LIB libspdk_nvme.a 00:04:24.794 SO libspdk_event.so.14.0 00:04:24.794 SYMLINK libspdk_event.so 00:04:24.794 SO libspdk_nvme.so.15.0 00:04:25.052 CC lib/bdev/bdev.o 00:04:25.052 CC lib/bdev/bdev_rpc.o 00:04:25.052 CC lib/bdev/bdev_zone.o 00:04:25.052 CC lib/bdev/part.o 00:04:25.052 CC lib/bdev/scsi_nvme.o 00:04:25.052 SYMLINK libspdk_nvme.so 00:04:25.052 LIB 
libspdk_fuse_dispatcher.a 00:04:25.052 SO libspdk_fuse_dispatcher.so.1.0 00:04:25.310 SYMLINK libspdk_fuse_dispatcher.so 00:04:25.878 LIB libspdk_blob.a 00:04:25.878 SO libspdk_blob.so.11.0 00:04:25.878 SYMLINK libspdk_blob.so 00:04:26.136 CC lib/lvol/lvol.o 00:04:26.136 CC lib/blobfs/blobfs.o 00:04:26.136 CC lib/blobfs/tree.o 00:04:26.704 LIB libspdk_bdev.a 00:04:26.704 SO libspdk_bdev.so.17.0 00:04:26.963 LIB libspdk_blobfs.a 00:04:26.963 SO libspdk_blobfs.so.10.0 00:04:26.963 LIB libspdk_lvol.a 00:04:26.963 SYMLINK libspdk_bdev.so 00:04:26.963 SO libspdk_lvol.so.10.0 00:04:26.963 SYMLINK libspdk_blobfs.so 00:04:26.963 SYMLINK libspdk_lvol.so 00:04:27.222 CC lib/nvmf/ctrlr_discovery.o 00:04:27.222 CC lib/nvmf/ctrlr.o 00:04:27.222 CC lib/nvmf/ctrlr_bdev.o 00:04:27.222 CC lib/nvmf/subsystem.o 00:04:27.222 CC lib/nvmf/nvmf.o 00:04:27.222 CC lib/nvmf/nvmf_rpc.o 00:04:27.222 CC lib/nvmf/transport.o 00:04:27.222 CC lib/nvmf/tcp.o 00:04:27.222 CC lib/nvmf/stubs.o 00:04:27.222 CC lib/nvmf/mdns_server.o 00:04:27.222 CC lib/nvmf/vfio_user.o 00:04:27.222 CC lib/nvmf/rdma.o 00:04:27.222 CC lib/nvmf/auth.o 00:04:27.222 CC lib/scsi/dev.o 00:04:27.222 CC lib/scsi/port.o 00:04:27.222 CC lib/scsi/lun.o 00:04:27.222 CC lib/ublk/ublk.o 00:04:27.222 CC lib/nbd/nbd.o 00:04:27.222 CC lib/ublk/ublk_rpc.o 00:04:27.222 CC lib/nbd/nbd_rpc.o 00:04:27.222 CC lib/scsi/scsi.o 00:04:27.222 CC lib/scsi/scsi_bdev.o 00:04:27.222 CC lib/ftl/ftl_core.o 00:04:27.222 CC lib/scsi/scsi_pr.o 00:04:27.222 CC lib/ftl/ftl_init.o 00:04:27.222 CC lib/ftl/ftl_layout.o 00:04:27.222 CC lib/scsi/scsi_rpc.o 00:04:27.222 CC lib/ftl/ftl_debug.o 00:04:27.222 CC lib/scsi/task.o 00:04:27.222 CC lib/ftl/ftl_io.o 00:04:27.222 CC lib/ftl/ftl_sb.o 00:04:27.222 CC lib/ftl/ftl_l2p_flat.o 00:04:27.222 CC lib/ftl/ftl_l2p.o 00:04:27.222 CC lib/ftl/ftl_nv_cache.o 00:04:27.222 CC lib/ftl/ftl_band_ops.o 00:04:27.222 CC lib/ftl/ftl_band.o 00:04:27.222 CC lib/ftl/ftl_writer.o 00:04:27.222 CC lib/ftl/ftl_rq.o 00:04:27.222 CC 
lib/ftl/ftl_reloc.o 00:04:27.222 CC lib/ftl/ftl_l2p_cache.o 00:04:27.222 CC lib/ftl/ftl_p2l.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt.o 00:04:27.222 CC lib/ftl/ftl_p2l_log.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:27.222 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:27.222 CC lib/ftl/utils/ftl_conf.o 00:04:27.222 CC lib/ftl/utils/ftl_md.o 00:04:27.222 CC lib/ftl/utils/ftl_mempool.o 00:04:27.222 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.222 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:27.222 CC lib/ftl/utils/ftl_property.o 00:04:27.222 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:27.222 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:27.222 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:27.222 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:27.222 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:27.222 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:27.222 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:27.222 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:27.222 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:27.222 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:27.222 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:27.222 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:27.222 CC lib/ftl/base/ftl_base_dev.o 00:04:27.222 CC lib/ftl/base/ftl_base_bdev.o 00:04:27.222 CC lib/ftl/ftl_trace.o 00:04:27.788 LIB libspdk_nbd.a 00:04:27.788 SO libspdk_nbd.so.7.0 00:04:27.788 SYMLINK libspdk_nbd.so 00:04:27.788 LIB libspdk_scsi.a 00:04:28.047 SO libspdk_scsi.so.9.0 00:04:28.047 LIB libspdk_ublk.a 00:04:28.047 SYMLINK libspdk_scsi.so 00:04:28.047 SO libspdk_ublk.so.3.0 00:04:28.047 SYMLINK 
libspdk_ublk.so 00:04:28.306 CC lib/iscsi/conn.o 00:04:28.306 CC lib/iscsi/init_grp.o 00:04:28.306 CC lib/iscsi/iscsi.o 00:04:28.306 CC lib/iscsi/tgt_node.o 00:04:28.306 CC lib/iscsi/param.o 00:04:28.306 CC lib/iscsi/portal_grp.o 00:04:28.306 CC lib/iscsi/iscsi_subsystem.o 00:04:28.306 CC lib/iscsi/iscsi_rpc.o 00:04:28.306 CC lib/iscsi/task.o 00:04:28.306 CC lib/vhost/vhost.o 00:04:28.306 CC lib/vhost/vhost_rpc.o 00:04:28.306 CC lib/vhost/vhost_scsi.o 00:04:28.306 CC lib/vhost/vhost_blk.o 00:04:28.306 CC lib/vhost/rte_vhost_user.o 00:04:28.306 LIB libspdk_ftl.a 00:04:28.565 SO libspdk_ftl.so.9.0 00:04:28.824 SYMLINK libspdk_ftl.so 00:04:29.107 LIB libspdk_nvmf.a 00:04:29.107 LIB libspdk_vhost.a 00:04:29.107 SO libspdk_nvmf.so.20.0 00:04:29.107 SO libspdk_vhost.so.8.0 00:04:29.402 SYMLINK libspdk_vhost.so 00:04:29.402 LIB libspdk_iscsi.a 00:04:29.402 SYMLINK libspdk_nvmf.so 00:04:29.402 SO libspdk_iscsi.so.8.0 00:04:29.402 SYMLINK libspdk_iscsi.so 00:04:29.986 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.986 CC module/vfu_device/vfu_virtio.o 00:04:29.986 CC module/vfu_device/vfu_virtio_scsi.o 00:04:29.986 CC module/vfu_device/vfu_virtio_blk.o 00:04:29.986 CC module/vfu_device/vfu_virtio_rpc.o 00:04:29.986 CC module/vfu_device/vfu_virtio_fs.o 00:04:30.245 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:30.245 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:30.245 LIB libspdk_env_dpdk_rpc.a 00:04:30.245 CC module/fsdev/aio/fsdev_aio.o 00:04:30.245 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:30.245 CC module/fsdev/aio/linux_aio_mgr.o 00:04:30.245 CC module/keyring/linux/keyring.o 00:04:30.245 CC module/accel/error/accel_error.o 00:04:30.245 CC module/keyring/linux/keyring_rpc.o 00:04:30.245 CC module/accel/error/accel_error_rpc.o 00:04:30.245 CC module/accel/dsa/accel_dsa.o 00:04:30.245 CC module/accel/dsa/accel_dsa_rpc.o 00:04:30.245 CC module/sock/posix/posix.o 00:04:30.245 CC module/blob/bdev/blob_bdev.o 00:04:30.245 CC module/keyring/file/keyring.o 
00:04:30.245 CC module/keyring/file/keyring_rpc.o 00:04:30.245 CC module/accel/ioat/accel_ioat.o 00:04:30.245 CC module/accel/ioat/accel_ioat_rpc.o 00:04:30.245 CC module/scheduler/gscheduler/gscheduler.o 00:04:30.245 CC module/accel/iaa/accel_iaa.o 00:04:30.245 CC module/accel/iaa/accel_iaa_rpc.o 00:04:30.245 SO libspdk_env_dpdk_rpc.so.6.0 00:04:30.245 SYMLINK libspdk_env_dpdk_rpc.so 00:04:30.245 LIB libspdk_keyring_linux.a 00:04:30.245 LIB libspdk_scheduler_gscheduler.a 00:04:30.245 LIB libspdk_keyring_file.a 00:04:30.245 LIB libspdk_scheduler_dynamic.a 00:04:30.245 LIB libspdk_scheduler_dpdk_governor.a 00:04:30.245 SO libspdk_keyring_linux.so.1.0 00:04:30.245 SO libspdk_scheduler_gscheduler.so.4.0 00:04:30.245 SO libspdk_keyring_file.so.2.0 00:04:30.245 LIB libspdk_accel_ioat.a 00:04:30.245 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:30.504 SO libspdk_scheduler_dynamic.so.4.0 00:04:30.504 LIB libspdk_accel_iaa.a 00:04:30.504 LIB libspdk_accel_error.a 00:04:30.504 SO libspdk_accel_ioat.so.6.0 00:04:30.504 SO libspdk_accel_iaa.so.3.0 00:04:30.504 SO libspdk_accel_error.so.2.0 00:04:30.504 SYMLINK libspdk_scheduler_gscheduler.so 00:04:30.504 SYMLINK libspdk_keyring_linux.so 00:04:30.504 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:30.504 SYMLINK libspdk_keyring_file.so 00:04:30.504 LIB libspdk_accel_dsa.a 00:04:30.504 SYMLINK libspdk_scheduler_dynamic.so 00:04:30.504 LIB libspdk_blob_bdev.a 00:04:30.504 SYMLINK libspdk_accel_ioat.so 00:04:30.504 SO libspdk_accel_dsa.so.5.0 00:04:30.504 SYMLINK libspdk_accel_iaa.so 00:04:30.504 SO libspdk_blob_bdev.so.11.0 00:04:30.504 SYMLINK libspdk_accel_error.so 00:04:30.504 SYMLINK libspdk_accel_dsa.so 00:04:30.504 LIB libspdk_vfu_device.a 00:04:30.504 SYMLINK libspdk_blob_bdev.so 00:04:30.504 SO libspdk_vfu_device.so.3.0 00:04:30.763 SYMLINK libspdk_vfu_device.so 00:04:30.763 LIB libspdk_fsdev_aio.a 00:04:30.763 SO libspdk_fsdev_aio.so.1.0 00:04:30.763 LIB libspdk_sock_posix.a 00:04:30.763 SO libspdk_sock_posix.so.6.0 
00:04:30.763 SYMLINK libspdk_fsdev_aio.so 00:04:30.763 SYMLINK libspdk_sock_posix.so 00:04:31.022 CC module/blobfs/bdev/blobfs_bdev.o 00:04:31.022 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:31.022 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:31.023 CC module/bdev/null/bdev_null.o 00:04:31.023 CC module/bdev/null/bdev_null_rpc.o 00:04:31.023 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:31.023 CC module/bdev/error/vbdev_error.o 00:04:31.023 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:31.023 CC module/bdev/nvme/bdev_nvme.o 00:04:31.023 CC module/bdev/error/vbdev_error_rpc.o 00:04:31.023 CC module/bdev/delay/vbdev_delay.o 00:04:31.023 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:31.023 CC module/bdev/nvme/nvme_rpc.o 00:04:31.023 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:31.023 CC module/bdev/gpt/gpt.o 00:04:31.023 CC module/bdev/nvme/bdev_mdns_client.o 00:04:31.023 CC module/bdev/gpt/vbdev_gpt.o 00:04:31.023 CC module/bdev/nvme/vbdev_opal.o 00:04:31.023 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:31.023 CC module/bdev/aio/bdev_aio.o 00:04:31.023 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:31.023 CC module/bdev/aio/bdev_aio_rpc.o 00:04:31.023 CC module/bdev/passthru/vbdev_passthru.o 00:04:31.023 CC module/bdev/raid/bdev_raid.o 00:04:31.023 CC module/bdev/lvol/vbdev_lvol.o 00:04:31.023 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:31.023 CC module/bdev/raid/bdev_raid_rpc.o 00:04:31.023 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:31.023 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:31.023 CC module/bdev/ftl/bdev_ftl.o 00:04:31.023 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:31.023 CC module/bdev/raid/raid0.o 00:04:31.023 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.023 CC module/bdev/malloc/bdev_malloc.o 00:04:31.023 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:31.023 CC module/bdev/raid/raid1.o 00:04:31.023 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:31.023 CC module/bdev/raid/concat.o 00:04:31.023 CC module/bdev/iscsi/bdev_iscsi.o 00:04:31.023 
CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:31.023 CC module/bdev/split/vbdev_split.o 00:04:31.023 CC module/bdev/split/vbdev_split_rpc.o 00:04:31.281 LIB libspdk_blobfs_bdev.a 00:04:31.281 SO libspdk_blobfs_bdev.so.6.0 00:04:31.281 LIB libspdk_bdev_error.a 00:04:31.281 SYMLINK libspdk_blobfs_bdev.so 00:04:31.281 LIB libspdk_bdev_null.a 00:04:31.281 SO libspdk_bdev_error.so.6.0 00:04:31.281 LIB libspdk_bdev_split.a 00:04:31.281 LIB libspdk_bdev_gpt.a 00:04:31.281 LIB libspdk_bdev_ftl.a 00:04:31.281 SO libspdk_bdev_split.so.6.0 00:04:31.281 SO libspdk_bdev_null.so.6.0 00:04:31.281 SO libspdk_bdev_gpt.so.6.0 00:04:31.281 SYMLINK libspdk_bdev_error.so 00:04:31.281 LIB libspdk_bdev_passthru.a 00:04:31.281 SO libspdk_bdev_ftl.so.6.0 00:04:31.281 SYMLINK libspdk_bdev_null.so 00:04:31.540 LIB libspdk_bdev_aio.a 00:04:31.540 SYMLINK libspdk_bdev_split.so 00:04:31.540 LIB libspdk_bdev_zone_block.a 00:04:31.540 SO libspdk_bdev_passthru.so.6.0 00:04:31.540 LIB libspdk_bdev_malloc.a 00:04:31.540 LIB libspdk_bdev_delay.a 00:04:31.540 SYMLINK libspdk_bdev_gpt.so 00:04:31.540 LIB libspdk_bdev_iscsi.a 00:04:31.540 SO libspdk_bdev_aio.so.6.0 00:04:31.540 SYMLINK libspdk_bdev_ftl.so 00:04:31.540 SO libspdk_bdev_zone_block.so.6.0 00:04:31.540 SO libspdk_bdev_delay.so.6.0 00:04:31.540 SO libspdk_bdev_malloc.so.6.0 00:04:31.540 SO libspdk_bdev_iscsi.so.6.0 00:04:31.540 SYMLINK libspdk_bdev_passthru.so 00:04:31.540 SYMLINK libspdk_bdev_aio.so 00:04:31.540 SYMLINK libspdk_bdev_zone_block.so 00:04:31.540 SYMLINK libspdk_bdev_delay.so 00:04:31.540 SYMLINK libspdk_bdev_malloc.so 00:04:31.540 LIB libspdk_bdev_virtio.a 00:04:31.540 SYMLINK libspdk_bdev_iscsi.so 00:04:31.540 LIB libspdk_bdev_lvol.a 00:04:31.540 SO libspdk_bdev_virtio.so.6.0 00:04:31.540 SO libspdk_bdev_lvol.so.6.0 00:04:31.540 SYMLINK libspdk_bdev_virtio.so 00:04:31.540 SYMLINK libspdk_bdev_lvol.so 00:04:31.799 LIB libspdk_bdev_raid.a 00:04:31.799 SO libspdk_bdev_raid.so.6.0 00:04:32.058 SYMLINK libspdk_bdev_raid.so 
00:04:32.995 LIB libspdk_bdev_nvme.a 00:04:32.995 SO libspdk_bdev_nvme.so.7.1 00:04:32.995 SYMLINK libspdk_bdev_nvme.so 00:04:33.563 CC module/event/subsystems/vmd/vmd.o 00:04:33.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.563 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.563 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.563 CC module/event/subsystems/sock/sock.o 00:04:33.563 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.563 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:33.563 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.563 CC module/event/subsystems/keyring/keyring.o 00:04:33.563 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.822 LIB libspdk_event_scheduler.a 00:04:33.822 LIB libspdk_event_vhost_blk.a 00:04:33.822 LIB libspdk_event_vfu_tgt.a 00:04:33.822 LIB libspdk_event_sock.a 00:04:33.822 LIB libspdk_event_vmd.a 00:04:33.822 LIB libspdk_event_keyring.a 00:04:33.822 LIB libspdk_event_fsdev.a 00:04:33.822 LIB libspdk_event_iobuf.a 00:04:33.822 SO libspdk_event_vfu_tgt.so.3.0 00:04:33.822 SO libspdk_event_scheduler.so.4.0 00:04:33.822 SO libspdk_event_fsdev.so.1.0 00:04:33.822 SO libspdk_event_vhost_blk.so.3.0 00:04:33.822 SO libspdk_event_sock.so.5.0 00:04:33.822 SO libspdk_event_vmd.so.6.0 00:04:33.822 SO libspdk_event_keyring.so.1.0 00:04:33.822 SO libspdk_event_iobuf.so.3.0 00:04:33.822 SYMLINK libspdk_event_fsdev.so 00:04:33.822 SYMLINK libspdk_event_vfu_tgt.so 00:04:33.822 SYMLINK libspdk_event_scheduler.so 00:04:33.822 SYMLINK libspdk_event_vmd.so 00:04:33.822 SYMLINK libspdk_event_sock.so 00:04:33.822 SYMLINK libspdk_event_vhost_blk.so 00:04:33.822 SYMLINK libspdk_event_keyring.so 00:04:33.822 SYMLINK libspdk_event_iobuf.so 00:04:34.082 CC module/event/subsystems/accel/accel.o 00:04:34.341 LIB libspdk_event_accel.a 00:04:34.341 SO libspdk_event_accel.so.6.0 00:04:34.341 SYMLINK libspdk_event_accel.so 00:04:34.910 CC module/event/subsystems/bdev/bdev.o 00:04:34.910 LIB libspdk_event_bdev.a 00:04:34.910 
SO libspdk_event_bdev.so.6.0 00:04:34.910 SYMLINK libspdk_event_bdev.so 00:04:35.478 CC module/event/subsystems/scsi/scsi.o 00:04:35.478 CC module/event/subsystems/nbd/nbd.o 00:04:35.478 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:35.478 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.478 CC module/event/subsystems/ublk/ublk.o 00:04:35.478 LIB libspdk_event_nbd.a 00:04:35.478 LIB libspdk_event_ublk.a 00:04:35.478 LIB libspdk_event_scsi.a 00:04:35.478 SO libspdk_event_nbd.so.6.0 00:04:35.478 SO libspdk_event_ublk.so.3.0 00:04:35.478 SO libspdk_event_scsi.so.6.0 00:04:35.478 LIB libspdk_event_nvmf.a 00:04:35.478 SYMLINK libspdk_event_nbd.so 00:04:35.478 SYMLINK libspdk_event_ublk.so 00:04:35.478 SO libspdk_event_nvmf.so.6.0 00:04:35.478 SYMLINK libspdk_event_scsi.so 00:04:35.738 SYMLINK libspdk_event_nvmf.so 00:04:35.998 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.998 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.998 LIB libspdk_event_vhost_scsi.a 00:04:35.998 LIB libspdk_event_iscsi.a 00:04:35.998 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.998 SO libspdk_event_iscsi.so.6.0 00:04:35.998 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.998 SYMLINK libspdk_event_iscsi.so 00:04:36.257 SO libspdk.so.6.0 00:04:36.257 SYMLINK libspdk.so 00:04:36.516 CXX app/trace/trace.o 00:04:36.516 CC app/trace_record/trace_record.o 00:04:36.516 CC app/spdk_top/spdk_top.o 00:04:36.516 CC app/spdk_nvme_perf/perf.o 00:04:36.783 CC app/spdk_nvme_identify/identify.o 00:04:36.783 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.783 CC test/rpc_client/rpc_client_test.o 00:04:36.783 CC app/spdk_lspci/spdk_lspci.o 00:04:36.783 TEST_HEADER include/spdk/accel_module.h 00:04:36.783 TEST_HEADER include/spdk/accel.h 00:04:36.783 TEST_HEADER include/spdk/barrier.h 00:04:36.783 TEST_HEADER include/spdk/assert.h 00:04:36.783 TEST_HEADER include/spdk/base64.h 00:04:36.783 TEST_HEADER include/spdk/bdev_module.h 00:04:36.783 TEST_HEADER include/spdk/bdev.h 00:04:36.783 
TEST_HEADER include/spdk/bit_pool.h 00:04:36.783 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.783 TEST_HEADER include/spdk/bit_array.h 00:04:36.783 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.783 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.783 TEST_HEADER include/spdk/blob.h 00:04:36.783 TEST_HEADER include/spdk/blobfs.h 00:04:36.783 TEST_HEADER include/spdk/config.h 00:04:36.783 TEST_HEADER include/spdk/conf.h 00:04:36.783 TEST_HEADER include/spdk/crc16.h 00:04:36.783 TEST_HEADER include/spdk/cpuset.h 00:04:36.783 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:36.783 TEST_HEADER include/spdk/crc32.h 00:04:36.783 TEST_HEADER include/spdk/crc64.h 00:04:36.783 TEST_HEADER include/spdk/dma.h 00:04:36.783 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.783 TEST_HEADER include/spdk/dif.h 00:04:36.783 TEST_HEADER include/spdk/event.h 00:04:36.783 TEST_HEADER include/spdk/env.h 00:04:36.783 TEST_HEADER include/spdk/endian.h 00:04:36.783 TEST_HEADER include/spdk/fd_group.h 00:04:36.783 TEST_HEADER include/spdk/fd.h 00:04:36.783 TEST_HEADER include/spdk/file.h 00:04:36.783 TEST_HEADER include/spdk/fsdev.h 00:04:36.783 TEST_HEADER include/spdk/ftl.h 00:04:36.783 TEST_HEADER include/spdk/fsdev_module.h 00:04:36.783 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.783 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:36.783 TEST_HEADER include/spdk/hexlify.h 00:04:36.783 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.783 TEST_HEADER include/spdk/histogram_data.h 00:04:36.783 TEST_HEADER include/spdk/idxd.h 00:04:36.783 TEST_HEADER include/spdk/ioat.h 00:04:36.783 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.783 TEST_HEADER include/spdk/init.h 00:04:36.783 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.783 CC app/spdk_dd/spdk_dd.o 00:04:36.783 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.783 TEST_HEADER include/spdk/json.h 00:04:36.783 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.783 TEST_HEADER include/spdk/keyring.h 00:04:36.783 TEST_HEADER include/spdk/likely.h 00:04:36.783 
TEST_HEADER include/spdk/log.h 00:04:36.783 TEST_HEADER include/spdk/keyring_module.h 00:04:36.783 TEST_HEADER include/spdk/memory.h 00:04:36.783 TEST_HEADER include/spdk/lvol.h 00:04:36.783 TEST_HEADER include/spdk/md5.h 00:04:36.783 CC app/nvmf_tgt/nvmf_main.o 00:04:36.783 TEST_HEADER include/spdk/mmio.h 00:04:36.783 TEST_HEADER include/spdk/nbd.h 00:04:36.783 TEST_HEADER include/spdk/net.h 00:04:36.783 TEST_HEADER include/spdk/notify.h 00:04:36.783 TEST_HEADER include/spdk/nvme.h 00:04:36.783 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.783 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.783 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.783 TEST_HEADER include/spdk/nvme_spec.h 00:04:36.783 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.783 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.783 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.783 TEST_HEADER include/spdk/nvmf.h 00:04:36.783 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.783 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.783 TEST_HEADER include/spdk/opal.h 00:04:36.783 TEST_HEADER include/spdk/opal_spec.h 00:04:36.783 TEST_HEADER include/spdk/pci_ids.h 00:04:36.783 TEST_HEADER include/spdk/reduce.h 00:04:36.783 TEST_HEADER include/spdk/queue.h 00:04:36.783 TEST_HEADER include/spdk/pipe.h 00:04:36.783 TEST_HEADER include/spdk/scsi.h 00:04:36.783 TEST_HEADER include/spdk/scheduler.h 00:04:36.783 TEST_HEADER include/spdk/rpc.h 00:04:36.783 TEST_HEADER include/spdk/scsi_spec.h 00:04:36.783 TEST_HEADER include/spdk/stdinc.h 00:04:36.783 TEST_HEADER include/spdk/sock.h 00:04:36.783 TEST_HEADER include/spdk/string.h 00:04:36.783 TEST_HEADER include/spdk/thread.h 00:04:36.783 TEST_HEADER include/spdk/trace.h 00:04:36.783 TEST_HEADER include/spdk/trace_parser.h 00:04:36.783 TEST_HEADER include/spdk/tree.h 00:04:36.783 TEST_HEADER include/spdk/ublk.h 00:04:36.783 TEST_HEADER include/spdk/util.h 00:04:36.783 TEST_HEADER include/spdk/version.h 00:04:36.783 CC app/spdk_tgt/spdk_tgt.o 00:04:36.783 TEST_HEADER 
include/spdk/uuid.h 00:04:36.783 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:36.783 TEST_HEADER include/spdk/vhost.h 00:04:36.783 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.783 TEST_HEADER include/spdk/vmd.h 00:04:36.783 TEST_HEADER include/spdk/xor.h 00:04:36.783 TEST_HEADER include/spdk/zipf.h 00:04:36.783 CXX test/cpp_headers/accel.o 00:04:36.783 CXX test/cpp_headers/accel_module.o 00:04:36.783 CXX test/cpp_headers/assert.o 00:04:36.783 CXX test/cpp_headers/barrier.o 00:04:36.783 CXX test/cpp_headers/base64.o 00:04:36.783 CXX test/cpp_headers/bdev.o 00:04:36.783 CXX test/cpp_headers/bdev_module.o 00:04:36.783 CXX test/cpp_headers/bdev_zone.o 00:04:36.783 CXX test/cpp_headers/bit_pool.o 00:04:36.783 CXX test/cpp_headers/blob_bdev.o 00:04:36.783 CXX test/cpp_headers/bit_array.o 00:04:36.783 CXX test/cpp_headers/blobfs.o 00:04:36.783 CXX test/cpp_headers/blob.o 00:04:36.783 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.783 CXX test/cpp_headers/conf.o 00:04:36.783 CXX test/cpp_headers/crc16.o 00:04:36.783 CXX test/cpp_headers/config.o 00:04:36.783 CXX test/cpp_headers/crc32.o 00:04:36.783 CXX test/cpp_headers/cpuset.o 00:04:36.783 CXX test/cpp_headers/crc64.o 00:04:36.783 CXX test/cpp_headers/dif.o 00:04:36.783 CXX test/cpp_headers/dma.o 00:04:36.783 CXX test/cpp_headers/endian.o 00:04:36.783 CXX test/cpp_headers/env_dpdk.o 00:04:36.783 CXX test/cpp_headers/event.o 00:04:36.783 CXX test/cpp_headers/env.o 00:04:36.783 CXX test/cpp_headers/fd_group.o 00:04:36.783 CXX test/cpp_headers/file.o 00:04:36.783 CXX test/cpp_headers/fd.o 00:04:36.783 CXX test/cpp_headers/fsdev.o 00:04:36.783 CXX test/cpp_headers/ftl.o 00:04:36.783 CXX test/cpp_headers/fsdev_module.o 00:04:36.783 CXX test/cpp_headers/fuse_dispatcher.o 00:04:36.783 CXX test/cpp_headers/hexlify.o 00:04:36.783 CXX test/cpp_headers/gpt_spec.o 00:04:36.783 CXX test/cpp_headers/histogram_data.o 00:04:36.783 CXX test/cpp_headers/idxd.o 00:04:36.783 CXX test/cpp_headers/idxd_spec.o 00:04:36.783 CXX 
test/cpp_headers/ioat.o 00:04:36.783 CXX test/cpp_headers/init.o 00:04:36.783 CXX test/cpp_headers/ioat_spec.o 00:04:36.783 CXX test/cpp_headers/iscsi_spec.o 00:04:36.783 CXX test/cpp_headers/json.o 00:04:36.783 CXX test/cpp_headers/jsonrpc.o 00:04:36.783 CXX test/cpp_headers/keyring_module.o 00:04:36.783 CXX test/cpp_headers/keyring.o 00:04:36.783 CXX test/cpp_headers/log.o 00:04:36.783 CXX test/cpp_headers/lvol.o 00:04:36.783 CXX test/cpp_headers/likely.o 00:04:36.783 CXX test/cpp_headers/md5.o 00:04:36.783 CXX test/cpp_headers/mmio.o 00:04:36.783 CXX test/cpp_headers/memory.o 00:04:36.783 CXX test/cpp_headers/net.o 00:04:36.783 CXX test/cpp_headers/nbd.o 00:04:36.783 CXX test/cpp_headers/notify.o 00:04:36.783 CXX test/cpp_headers/nvme.o 00:04:36.783 CXX test/cpp_headers/nvme_intel.o 00:04:36.783 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.783 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.783 CC examples/util/zipf/zipf.o 00:04:36.783 CXX test/cpp_headers/nvme_zns.o 00:04:36.783 CXX test/cpp_headers/nvme_spec.o 00:04:36.783 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.783 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.783 CXX test/cpp_headers/nvmf_spec.o 00:04:36.783 CXX test/cpp_headers/nvmf.o 00:04:36.783 CC examples/ioat/perf/perf.o 00:04:36.783 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:36.783 CXX test/cpp_headers/nvmf_transport.o 00:04:36.783 CXX test/cpp_headers/opal.o 00:04:36.783 CC test/env/vtophys/vtophys.o 00:04:36.783 CC test/env/memory/memory_ut.o 00:04:36.784 CC test/app/jsoncat/jsoncat.o 00:04:36.784 CC test/env/pci/pci_ut.o 00:04:36.784 CC examples/ioat/verify/verify.o 00:04:36.784 CC test/app/histogram_perf/histogram_perf.o 00:04:36.784 CC app/fio/nvme/fio_plugin.o 00:04:36.784 CC test/app/stub/stub.o 00:04:36.784 CC test/dma/test_dma/test_dma.o 00:04:36.784 CC test/thread/poller_perf/poller_perf.o 00:04:37.064 CC test/app/bdev_svc/bdev_svc.o 00:04:37.064 CC app/fio/bdev/fio_plugin.o 00:04:37.064 LINK spdk_nvme_discover 00:04:37.064 LINK 
spdk_lspci 00:04:37.064 CC test/env/mem_callbacks/mem_callbacks.o 00:04:37.333 LINK rpc_client_test 00:04:37.333 LINK spdk_trace_record 00:04:37.333 LINK zipf 00:04:37.333 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:37.333 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:37.333 LINK histogram_perf 00:04:37.333 LINK interrupt_tgt 00:04:37.333 LINK nvmf_tgt 00:04:37.333 CXX test/cpp_headers/opal_spec.o 00:04:37.333 LINK iscsi_tgt 00:04:37.333 LINK env_dpdk_post_init 00:04:37.333 CXX test/cpp_headers/pci_ids.o 00:04:37.333 CXX test/cpp_headers/pipe.o 00:04:37.333 CXX test/cpp_headers/queue.o 00:04:37.333 CXX test/cpp_headers/reduce.o 00:04:37.333 CXX test/cpp_headers/rpc.o 00:04:37.333 CXX test/cpp_headers/scheduler.o 00:04:37.333 CXX test/cpp_headers/scsi.o 00:04:37.333 LINK stub 00:04:37.333 LINK spdk_tgt 00:04:37.333 CXX test/cpp_headers/scsi_spec.o 00:04:37.333 CXX test/cpp_headers/sock.o 00:04:37.333 CXX test/cpp_headers/thread.o 00:04:37.333 CXX test/cpp_headers/stdinc.o 00:04:37.333 LINK ioat_perf 00:04:37.333 CXX test/cpp_headers/string.o 00:04:37.333 CXX test/cpp_headers/trace.o 00:04:37.333 CXX test/cpp_headers/trace_parser.o 00:04:37.333 CXX test/cpp_headers/tree.o 00:04:37.333 CXX test/cpp_headers/ublk.o 00:04:37.333 CXX test/cpp_headers/uuid.o 00:04:37.333 CXX test/cpp_headers/util.o 00:04:37.590 CXX test/cpp_headers/version.o 00:04:37.590 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.590 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.590 LINK verify 00:04:37.590 CXX test/cpp_headers/vhost.o 00:04:37.590 LINK vtophys 00:04:37.590 CXX test/cpp_headers/vmd.o 00:04:37.590 CXX test/cpp_headers/xor.o 00:04:37.590 CXX test/cpp_headers/zipf.o 00:04:37.590 LINK jsoncat 00:04:37.590 LINK poller_perf 00:04:37.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:37.590 LINK bdev_svc 00:04:37.590 LINK spdk_dd 00:04:37.849 LINK spdk_trace 00:04:37.849 LINK pci_ut 00:04:37.849 LINK spdk_nvme 00:04:37.849 LINK 
test_dma 00:04:37.849 CC examples/sock/hello_world/hello_sock.o 00:04:37.849 CC examples/vmd/led/led.o 00:04:37.849 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.849 LINK spdk_bdev 00:04:37.849 CC examples/idxd/perf/perf.o 00:04:37.849 LINK spdk_nvme_perf 00:04:37.849 CC examples/thread/thread/thread_ex.o 00:04:37.849 LINK spdk_nvme_identify 00:04:37.849 CC test/event/reactor_perf/reactor_perf.o 00:04:38.107 CC test/event/event_perf/event_perf.o 00:04:38.107 CC test/event/reactor/reactor.o 00:04:38.107 CC test/event/app_repeat/app_repeat.o 00:04:38.107 LINK nvme_fuzz 00:04:38.107 CC test/event/scheduler/scheduler.o 00:04:38.107 LINK vhost_fuzz 00:04:38.107 LINK mem_callbacks 00:04:38.107 LINK lsvmd 00:04:38.107 LINK spdk_top 00:04:38.107 LINK led 00:04:38.107 LINK reactor_perf 00:04:38.107 CC app/vhost/vhost.o 00:04:38.107 LINK event_perf 00:04:38.107 LINK hello_sock 00:04:38.107 LINK reactor 00:04:38.107 LINK app_repeat 00:04:38.365 LINK idxd_perf 00:04:38.365 LINK thread 00:04:38.365 LINK scheduler 00:04:38.365 CC test/nvme/sgl/sgl.o 00:04:38.365 CC test/nvme/err_injection/err_injection.o 00:04:38.365 CC test/nvme/aer/aer.o 00:04:38.365 CC test/nvme/reset/reset.o 00:04:38.365 CC test/nvme/reserve/reserve.o 00:04:38.366 CC test/nvme/fused_ordering/fused_ordering.o 00:04:38.366 CC test/nvme/simple_copy/simple_copy.o 00:04:38.366 CC test/nvme/compliance/nvme_compliance.o 00:04:38.366 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:38.366 CC test/nvme/startup/startup.o 00:04:38.366 CC test/nvme/boot_partition/boot_partition.o 00:04:38.366 CC test/nvme/e2edp/nvme_dp.o 00:04:38.366 CC test/nvme/connect_stress/connect_stress.o 00:04:38.366 CC test/nvme/cuse/cuse.o 00:04:38.366 CC test/nvme/overhead/overhead.o 00:04:38.366 CC test/nvme/fdp/fdp.o 00:04:38.366 CC test/accel/dif/dif.o 00:04:38.366 CC test/blobfs/mkfs/mkfs.o 00:04:38.366 LINK vhost 00:04:38.624 CC test/lvol/esnap/esnap.o 00:04:38.624 LINK err_injection 00:04:38.624 LINK memory_ut 00:04:38.624 LINK boot_partition 
00:04:38.624 LINK doorbell_aers 00:04:38.624 LINK fused_ordering 00:04:38.624 LINK startup 00:04:38.624 LINK reserve 00:04:38.624 LINK connect_stress 00:04:38.624 LINK sgl 00:04:38.624 LINK simple_copy 00:04:38.624 LINK reset 00:04:38.624 CC examples/nvme/hello_world/hello_world.o 00:04:38.624 CC examples/nvme/arbitration/arbitration.o 00:04:38.624 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:38.624 CC examples/nvme/abort/abort.o 00:04:38.624 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:38.624 CC examples/nvme/reconnect/reconnect.o 00:04:38.624 CC examples/nvme/hotplug/hotplug.o 00:04:38.624 LINK aer 00:04:38.624 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:38.624 LINK nvme_dp 00:04:38.624 LINK mkfs 00:04:38.624 LINK nvme_compliance 00:04:38.624 LINK overhead 00:04:38.624 LINK fdp 00:04:38.624 CC examples/accel/perf/accel_perf.o 00:04:38.883 CC examples/blob/cli/blobcli.o 00:04:38.883 CC examples/blob/hello_world/hello_blob.o 00:04:38.883 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:38.883 LINK pmr_persistence 00:04:38.883 LINK hello_world 00:04:38.883 LINK cmb_copy 00:04:38.883 LINK hotplug 00:04:38.883 LINK arbitration 00:04:38.883 LINK reconnect 00:04:38.883 LINK abort 00:04:38.883 LINK iscsi_fuzz 00:04:38.883 LINK dif 00:04:38.883 LINK nvme_manage 00:04:38.883 LINK hello_blob 00:04:39.142 LINK hello_fsdev 00:04:39.142 LINK accel_perf 00:04:39.142 LINK blobcli 00:04:39.401 LINK cuse 00:04:39.401 CC test/bdev/bdevio/bdevio.o 00:04:39.660 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.660 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.919 LINK bdevio 00:04:39.919 LINK hello_bdev 00:04:40.178 LINK bdevperf 00:04:40.745 CC examples/nvmf/nvmf/nvmf.o 00:04:41.004 LINK nvmf 00:04:41.941 LINK esnap 00:04:42.200 00:04:42.200 real 0m55.467s 00:04:42.200 user 8m16.920s 00:04:42.200 sys 3m46.959s 00:04:42.200 12:18:47 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:42.200 12:18:47 make -- common/autotest_common.sh@10 -- $ set +x 
00:04:42.200 ************************************ 00:04:42.200 END TEST make 00:04:42.200 ************************************ 00:04:42.200 12:18:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:42.200 12:18:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:42.200 12:18:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:42.200 12:18:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.200 12:18:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:42.200 12:18:47 -- pm/common@44 -- $ pid=4099823 00:04:42.200 12:18:47 -- pm/common@50 -- $ kill -TERM 4099823 00:04:42.200 12:18:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.200 12:18:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:42.200 12:18:47 -- pm/common@44 -- $ pid=4099824 00:04:42.200 12:18:47 -- pm/common@50 -- $ kill -TERM 4099824 00:04:42.200 12:18:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.200 12:18:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:42.200 12:18:47 -- pm/common@44 -- $ pid=4099826 00:04:42.200 12:18:47 -- pm/common@50 -- $ kill -TERM 4099826 00:04:42.200 12:18:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.200 12:18:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:42.200 12:18:47 -- pm/common@44 -- $ pid=4099849 00:04:42.200 12:18:47 -- pm/common@50 -- $ sudo -E kill -TERM 4099849 00:04:42.200 12:18:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:42.200 12:18:47 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:42.460 12:18:48 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.460 12:18:48 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.460 12:18:48 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.460 12:18:48 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.460 12:18:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.460 12:18:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.460 12:18:48 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.460 12:18:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.460 12:18:48 -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.460 12:18:48 -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.460 12:18:48 -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.460 12:18:48 -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.460 12:18:48 -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.460 12:18:48 -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.460 12:18:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.460 12:18:48 -- scripts/common.sh@344 -- # case "$op" in 00:04:42.460 12:18:48 -- scripts/common.sh@345 -- # : 1 00:04:42.460 12:18:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.460 12:18:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.460 12:18:48 -- scripts/common.sh@365 -- # decimal 1 00:04:42.460 12:18:48 -- scripts/common.sh@353 -- # local d=1 00:04:42.460 12:18:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.460 12:18:48 -- scripts/common.sh@355 -- # echo 1 00:04:42.460 12:18:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.460 12:18:48 -- scripts/common.sh@366 -- # decimal 2 00:04:42.460 12:18:48 -- scripts/common.sh@353 -- # local d=2 00:04:42.460 12:18:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.460 12:18:48 -- scripts/common.sh@355 -- # echo 2 00:04:42.460 12:18:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.460 12:18:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.460 12:18:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.460 12:18:48 -- scripts/common.sh@368 -- # return 0 00:04:42.460 12:18:48 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.460 12:18:48 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.460 --rc genhtml_branch_coverage=1 00:04:42.460 --rc genhtml_function_coverage=1 00:04:42.460 --rc genhtml_legend=1 00:04:42.460 --rc geninfo_all_blocks=1 00:04:42.460 --rc geninfo_unexecuted_blocks=1 00:04:42.460 00:04:42.460 ' 00:04:42.460 12:18:48 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.460 --rc genhtml_branch_coverage=1 00:04:42.460 --rc genhtml_function_coverage=1 00:04:42.460 --rc genhtml_legend=1 00:04:42.460 --rc geninfo_all_blocks=1 00:04:42.460 --rc geninfo_unexecuted_blocks=1 00:04:42.460 00:04:42.460 ' 00:04:42.460 12:18:48 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.460 --rc genhtml_branch_coverage=1 00:04:42.460 --rc 
genhtml_function_coverage=1 00:04:42.460 --rc genhtml_legend=1 00:04:42.460 --rc geninfo_all_blocks=1 00:04:42.460 --rc geninfo_unexecuted_blocks=1 00:04:42.460 00:04:42.460 ' 00:04:42.460 12:18:48 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.460 --rc genhtml_branch_coverage=1 00:04:42.460 --rc genhtml_function_coverage=1 00:04:42.460 --rc genhtml_legend=1 00:04:42.460 --rc geninfo_all_blocks=1 00:04:42.460 --rc geninfo_unexecuted_blocks=1 00:04:42.460 00:04:42.460 ' 00:04:42.460 12:18:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.460 12:18:48 -- nvmf/common.sh@7 -- # uname -s 00:04:42.460 12:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.460 12:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.460 12:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.460 12:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.460 12:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.460 12:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.460 12:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.460 12:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.460 12:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.460 12:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.460 12:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:42.460 12:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:42.460 12:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.460 12:18:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.460 12:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:42.460 12:18:48 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.460 12:18:48 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.460 12:18:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.460 12:18:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.460 12:18:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.460 12:18:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.460 12:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.460 12:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.460 12:18:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.460 12:18:48 -- paths/export.sh@5 -- # export PATH 00:04:42.461 12:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.461 12:18:48 -- nvmf/common.sh@51 -- # : 0 00:04:42.461 12:18:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.461 12:18:48 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:42.461 12:18:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.461 12:18:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.461 12:18:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.461 12:18:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.461 12:18:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.461 12:18:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.461 12:18:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.461 12:18:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.461 12:18:48 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.461 12:18:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:42.461 12:18:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.461 12:18:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.461 12:18:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.461 12:18:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:42.461 12:18:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.461 12:18:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.461 12:18:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.461 12:18:48 -- spdk/autotest.sh@48 -- # udevadm_pid=4162816 00:04:42.461 12:18:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:42.461 12:18:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.461 12:18:48 -- pm/common@17 -- # local monitor 00:04:42.461 12:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.461 12:18:48 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:42.461 12:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.461 12:18:48 -- pm/common@21 -- # date +%s 00:04:42.461 12:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.461 12:18:48 -- pm/common@21 -- # date +%s 00:04:42.461 12:18:48 -- pm/common@25 -- # sleep 1 00:04:42.461 12:18:48 -- pm/common@21 -- # date +%s 00:04:42.461 12:18:48 -- pm/common@21 -- # date +%s 00:04:42.461 12:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101528 00:04:42.461 12:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101528 00:04:42.461 12:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101528 00:04:42.461 12:18:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101528 00:04:42.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101528_collect-cpu-load.pm.log 00:04:42.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101528_collect-vmstat.pm.log 00:04:42.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101528_collect-cpu-temp.pm.log 00:04:42.720 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101528_collect-bmc-pm.bmc.pm.log 00:04:43.655 
12:18:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.655 12:18:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:43.655 12:18:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.655 12:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.655 12:18:49 -- spdk/autotest.sh@59 -- # create_test_list 00:04:43.655 12:18:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:43.655 12:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.655 12:18:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:43.655 12:18:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.655 12:18:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.655 12:18:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:43.655 12:18:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.655 12:18:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:43.655 12:18:49 -- common/autotest_common.sh@1457 -- # uname 00:04:43.655 12:18:49 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:43.655 12:18:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:43.655 12:18:49 -- common/autotest_common.sh@1477 -- # uname 00:04:43.655 12:18:49 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:43.655 12:18:49 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:43.655 12:18:49 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:43.656 lcov: LCOV version 1.15 00:04:43.656 12:18:49 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:55.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:08.074 12:19:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:08.074 12:19:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.074 12:19:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.074 12:19:13 -- spdk/autotest.sh@78 -- # rm -f 00:05:08.074 12:19:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.364 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:11.364 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:11.364 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:11.365 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:11.365 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:11.365 12:19:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:11.365 12:19:16 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:11.365 12:19:16 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:11.365 12:19:16 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:11.365 12:19:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.365 12:19:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:11.365 12:19:16 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:11.365 12:19:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.365 12:19:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.365 12:19:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:11.365 12:19:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.365 12:19:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.365 12:19:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:11.365 12:19:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:11.365 12:19:16 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.365 No valid GPT data, bailing 00:05:11.365 12:19:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.365 12:19:17 -- scripts/common.sh@394 -- # pt= 00:05:11.365 12:19:17 -- scripts/common.sh@395 -- # return 1 00:05:11.365 12:19:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.365 1+0 records in 00:05:11.365 1+0 records out 00:05:11.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406904 s, 258 MB/s 00:05:11.365 12:19:17 -- spdk/autotest.sh@105 -- # sync 00:05:11.365 12:19:17 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.365 12:19:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.365 12:19:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:16.814 12:19:22 -- spdk/autotest.sh@111 -- # uname -s 00:05:16.814 12:19:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:16.814 12:19:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:16.814 12:19:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:20.105 Hugepages 00:05:20.105 node hugesize free / total 00:05:20.105 node0 1048576kB 0 / 0 00:05:20.105 node0 2048kB 0 / 0 00:05:20.105 node1 1048576kB 0 / 0 00:05:20.105 node1 2048kB 0 / 0 00:05:20.105 00:05:20.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.105 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:20.105 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:20.105 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:20.105 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:20.105 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:20.105 12:19:25 -- spdk/autotest.sh@117 -- # uname -s 00:05:20.105 12:19:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:20.105 12:19:25 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:20.105 12:19:25 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.641 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.641 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.900 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.279 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.279 12:19:30 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:25.659 12:19:31 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:25.659 12:19:31 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:25.659 12:19:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.659 12:19:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:25.659 12:19:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:25.659 12:19:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:25.659 12:19:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.659 12:19:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.659 12:19:31 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:25.659 12:19:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:25.659 12:19:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:25.659 12:19:31 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.197 Waiting for block devices as requested 00:05:28.197 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:28.456 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:28.456 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:28.456 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:28.716 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:28.716 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:28.716 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:28.975 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:28.975 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:28.975 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.234 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.234 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.234 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.234 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.493 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.493 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.493 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:29.753 12:19:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:29.753 12:19:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:29.753 12:19:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:29.753 12:19:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:29.753 12:19:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:29.753 12:19:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:29.753 12:19:35 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:29.753 12:19:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:29.753 12:19:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:29.753 12:19:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:29.753 12:19:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:29.753 12:19:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:29.753 12:19:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:29.753 12:19:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:29.753 12:19:35 -- common/autotest_common.sh@1543 -- # continue 00:05:29.753 12:19:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:29.753 12:19:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.753 12:19:35 -- common/autotest_common.sh@10 -- # set +x 00:05:29.753 12:19:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:29.753 12:19:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.753 12:19:35 -- common/autotest_common.sh@10 -- # set +x 00:05:29.753 12:19:35 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.043 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.043 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:05:33.043 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.043 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.043 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.043 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.044 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:34.423 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.423 12:19:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:34.423 12:19:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.423 12:19:39 -- common/autotest_common.sh@10 -- # set +x 00:05:34.423 12:19:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:34.423 12:19:39 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:34.423 12:19:39 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.423 12:19:39 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:34.423 12:19:39 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:34.423 12:19:39 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:34.423 12:19:39 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:34.423 12:19:39 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:34.423 12:19:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:34.423 12:19:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:34.423 12:19:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:34.423 12:19:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.423 12:19:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:34.423 12:19:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:34.423 12:19:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:34.423 12:19:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:34.423 12:19:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:34.423 12:19:40 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:34.423 12:19:40 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.423 12:19:40 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:34.423 12:19:40 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:34.423 12:19:40 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:34.423 12:19:40 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:34.423 12:19:40 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=4177063 00:05:34.423 12:19:40 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.423 12:19:40 -- common/autotest_common.sh@1585 -- # waitforlisten 4177063 00:05:34.423 12:19:40 -- common/autotest_common.sh@835 -- # '[' -z 4177063 ']' 00:05:34.423 12:19:40 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.423 12:19:40 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.423 12:19:40 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.423 12:19:40 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.423 12:19:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.423 [2024-11-20 12:19:40.077354] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:05:34.423 [2024-11-20 12:19:40.077409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177063 ] 00:05:34.423 [2024-11-20 12:19:40.154063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.682 [2024-11-20 12:19:40.195319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.682 12:19:40 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.682 12:19:40 -- common/autotest_common.sh@868 -- # return 0 00:05:34.682 12:19:40 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:34.682 12:19:40 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:34.682 12:19:40 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:37.972 nvme0n1 00:05:37.972 12:19:43 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:37.972 [2024-11-20 12:19:43.588968] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:37.972 request: 00:05:37.972 { 00:05:37.972 "nvme_ctrlr_name": "nvme0", 00:05:37.972 "password": "test", 00:05:37.972 "method": "bdev_nvme_opal_revert", 00:05:37.972 "req_id": 1 00:05:37.972 } 00:05:37.972 Got JSON-RPC error response 00:05:37.972 response: 00:05:37.972 { 00:05:37.972 "code": -32602, 00:05:37.972 "message": "Invalid parameters" 00:05:37.972 } 00:05:37.972 12:19:43 -- common/autotest_common.sh@1591 -- # true 
00:05:37.972 12:19:43 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:37.972 12:19:43 -- common/autotest_common.sh@1595 -- # killprocess 4177063 00:05:37.972 12:19:43 -- common/autotest_common.sh@954 -- # '[' -z 4177063 ']' 00:05:37.972 12:19:43 -- common/autotest_common.sh@958 -- # kill -0 4177063 00:05:37.972 12:19:43 -- common/autotest_common.sh@959 -- # uname 00:05:37.972 12:19:43 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.972 12:19:43 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4177063 00:05:37.972 12:19:43 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.972 12:19:43 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.972 12:19:43 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4177063' 00:05:37.972 killing process with pid 4177063 00:05:37.972 12:19:43 -- common/autotest_common.sh@973 -- # kill 4177063 00:05:37.972 12:19:43 -- common/autotest_common.sh@978 -- # wait 4177063 00:05:40.505 12:19:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:40.505 12:19:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:40.505 12:19:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.505 12:19:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:40.505 12:19:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:40.505 12:19:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.505 12:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 12:19:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:40.505 12:19:45 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.505 12:19:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.505 12:19:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.505 12:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 ************************************ 00:05:40.505 START TEST env 00:05:40.505 
************************************ 00:05:40.505 12:19:45 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:40.505 * Looking for test storage... 00:05:40.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:40.505 12:19:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.505 12:19:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.505 12:19:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.505 12:19:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.505 12:19:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.505 12:19:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.505 12:19:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.505 12:19:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.505 12:19:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.505 12:19:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.505 12:19:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.505 12:19:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.505 12:19:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.505 12:19:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.505 12:19:46 env -- scripts/common.sh@344 -- # case "$op" in 00:05:40.505 12:19:46 env -- scripts/common.sh@345 -- # : 1 00:05:40.505 12:19:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.505 12:19:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.505 12:19:46 env -- scripts/common.sh@365 -- # decimal 1 00:05:40.505 12:19:46 env -- scripts/common.sh@353 -- # local d=1 00:05:40.505 12:19:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.505 12:19:46 env -- scripts/common.sh@355 -- # echo 1 00:05:40.505 12:19:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.505 12:19:46 env -- scripts/common.sh@366 -- # decimal 2 00:05:40.505 12:19:46 env -- scripts/common.sh@353 -- # local d=2 00:05:40.505 12:19:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.505 12:19:46 env -- scripts/common.sh@355 -- # echo 2 00:05:40.505 12:19:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.505 12:19:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.505 12:19:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.505 12:19:46 env -- scripts/common.sh@368 -- # return 0 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.505 --rc genhtml_branch_coverage=1 00:05:40.505 --rc genhtml_function_coverage=1 00:05:40.505 --rc genhtml_legend=1 00:05:40.505 --rc geninfo_all_blocks=1 00:05:40.505 --rc geninfo_unexecuted_blocks=1 00:05:40.505 00:05:40.505 ' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.505 --rc genhtml_branch_coverage=1 00:05:40.505 --rc genhtml_function_coverage=1 00:05:40.505 --rc genhtml_legend=1 00:05:40.505 --rc geninfo_all_blocks=1 00:05:40.505 --rc geninfo_unexecuted_blocks=1 00:05:40.505 00:05:40.505 ' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:40.505 --rc genhtml_branch_coverage=1 00:05:40.505 --rc genhtml_function_coverage=1 00:05:40.505 --rc genhtml_legend=1 00:05:40.505 --rc geninfo_all_blocks=1 00:05:40.505 --rc geninfo_unexecuted_blocks=1 00:05:40.505 00:05:40.505 ' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.505 --rc genhtml_branch_coverage=1 00:05:40.505 --rc genhtml_function_coverage=1 00:05:40.505 --rc genhtml_legend=1 00:05:40.505 --rc geninfo_all_blocks=1 00:05:40.505 --rc geninfo_unexecuted_blocks=1 00:05:40.505 00:05:40.505 ' 00:05:40.505 12:19:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.505 12:19:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.506 12:19:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.506 ************************************ 00:05:40.506 START TEST env_memory 00:05:40.506 ************************************ 00:05:40.506 12:19:46 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:40.506 00:05:40.506 00:05:40.506 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.506 http://cunit.sourceforge.net/ 00:05:40.506 00:05:40.506 00:05:40.506 Suite: memory 00:05:40.506 Test: alloc and free memory map ...[2024-11-20 12:19:46.097996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:40.506 passed 00:05:40.506 Test: mem map translation ...[2024-11-20 12:19:46.116403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:40.506 [2024-11-20 
12:19:46.116417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:40.506 [2024-11-20 12:19:46.116452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:40.506 [2024-11-20 12:19:46.116458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:40.506 passed 00:05:40.506 Test: mem map registration ...[2024-11-20 12:19:46.152151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:40.506 [2024-11-20 12:19:46.152164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:40.506 passed 00:05:40.506 Test: mem map adjacent registrations ...passed 00:05:40.506 00:05:40.506 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.506 suites 1 1 n/a 0 0 00:05:40.506 tests 4 4 4 0 0 00:05:40.506 asserts 152 152 152 0 n/a 00:05:40.506 00:05:40.506 Elapsed time = 0.134 seconds 00:05:40.506 00:05:40.506 real 0m0.147s 00:05:40.506 user 0m0.136s 00:05:40.506 sys 0m0.010s 00:05:40.506 12:19:46 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.506 12:19:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:40.506 ************************************ 00:05:40.506 END TEST env_memory 00:05:40.506 ************************************ 00:05:40.506 12:19:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.506 12:19:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:40.506 12:19:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.506 12:19:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.506 ************************************ 00:05:40.506 START TEST env_vtophys 00:05:40.506 ************************************ 00:05:40.506 12:19:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:40.766 EAL: lib.eal log level changed from notice to debug 00:05:40.766 EAL: Detected lcore 0 as core 0 on socket 0 00:05:40.766 EAL: Detected lcore 1 as core 1 on socket 0 00:05:40.766 EAL: Detected lcore 2 as core 2 on socket 0 00:05:40.766 EAL: Detected lcore 3 as core 3 on socket 0 00:05:40.766 EAL: Detected lcore 4 as core 4 on socket 0 00:05:40.766 EAL: Detected lcore 5 as core 5 on socket 0 00:05:40.766 EAL: Detected lcore 6 as core 6 on socket 0 00:05:40.766 EAL: Detected lcore 7 as core 8 on socket 0 00:05:40.766 EAL: Detected lcore 8 as core 9 on socket 0 00:05:40.766 EAL: Detected lcore 9 as core 10 on socket 0 00:05:40.766 EAL: Detected lcore 10 as core 11 on socket 0 00:05:40.766 EAL: Detected lcore 11 as core 12 on socket 0 00:05:40.766 EAL: Detected lcore 12 as core 13 on socket 0 00:05:40.766 EAL: Detected lcore 13 as core 16 on socket 0 00:05:40.766 EAL: Detected lcore 14 as core 17 on socket 0 00:05:40.766 EAL: Detected lcore 15 as core 18 on socket 0 00:05:40.766 EAL: Detected lcore 16 as core 19 on socket 0 00:05:40.766 EAL: Detected lcore 17 as core 20 on socket 0 00:05:40.766 EAL: Detected lcore 18 as core 21 on socket 0 00:05:40.766 EAL: Detected lcore 19 as core 25 on socket 0 00:05:40.766 EAL: Detected lcore 20 as core 26 on socket 0 00:05:40.766 EAL: Detected lcore 21 as core 27 on socket 0 00:05:40.766 EAL: Detected lcore 22 as core 28 on socket 0 00:05:40.766 EAL: Detected lcore 23 as core 29 on socket 0 00:05:40.766 EAL: Detected lcore 24 as core 0 on socket 1 00:05:40.766 EAL: Detected lcore 25 
as core 1 on socket 1 00:05:40.766 EAL: Detected lcore 26 as core 2 on socket 1 00:05:40.766 EAL: Detected lcore 27 as core 3 on socket 1 00:05:40.766 EAL: Detected lcore 28 as core 4 on socket 1 00:05:40.766 EAL: Detected lcore 29 as core 5 on socket 1 00:05:40.766 EAL: Detected lcore 30 as core 6 on socket 1 00:05:40.766 EAL: Detected lcore 31 as core 8 on socket 1 00:05:40.766 EAL: Detected lcore 32 as core 10 on socket 1 00:05:40.766 EAL: Detected lcore 33 as core 11 on socket 1 00:05:40.766 EAL: Detected lcore 34 as core 12 on socket 1 00:05:40.766 EAL: Detected lcore 35 as core 13 on socket 1 00:05:40.766 EAL: Detected lcore 36 as core 16 on socket 1 00:05:40.766 EAL: Detected lcore 37 as core 17 on socket 1 00:05:40.766 EAL: Detected lcore 38 as core 18 on socket 1 00:05:40.766 EAL: Detected lcore 39 as core 19 on socket 1 00:05:40.766 EAL: Detected lcore 40 as core 20 on socket 1 00:05:40.766 EAL: Detected lcore 41 as core 21 on socket 1 00:05:40.766 EAL: Detected lcore 42 as core 24 on socket 1 00:05:40.766 EAL: Detected lcore 43 as core 25 on socket 1 00:05:40.766 EAL: Detected lcore 44 as core 26 on socket 1 00:05:40.766 EAL: Detected lcore 45 as core 27 on socket 1 00:05:40.766 EAL: Detected lcore 46 as core 28 on socket 1 00:05:40.766 EAL: Detected lcore 47 as core 29 on socket 1 00:05:40.766 EAL: Detected lcore 48 as core 0 on socket 0 00:05:40.766 EAL: Detected lcore 49 as core 1 on socket 0 00:05:40.766 EAL: Detected lcore 50 as core 2 on socket 0 00:05:40.766 EAL: Detected lcore 51 as core 3 on socket 0 00:05:40.766 EAL: Detected lcore 52 as core 4 on socket 0 00:05:40.766 EAL: Detected lcore 53 as core 5 on socket 0 00:05:40.766 EAL: Detected lcore 54 as core 6 on socket 0 00:05:40.766 EAL: Detected lcore 55 as core 8 on socket 0 00:05:40.766 EAL: Detected lcore 56 as core 9 on socket 0 00:05:40.766 EAL: Detected lcore 57 as core 10 on socket 0 00:05:40.766 EAL: Detected lcore 58 as core 11 on socket 0 00:05:40.766 EAL: Detected lcore 59 as core 
12 on socket 0 00:05:40.766 EAL: Detected lcore 60 as core 13 on socket 0 00:05:40.766 EAL: Detected lcore 61 as core 16 on socket 0 00:05:40.766 EAL: Detected lcore 62 as core 17 on socket 0 00:05:40.766 EAL: Detected lcore 63 as core 18 on socket 0 00:05:40.766 EAL: Detected lcore 64 as core 19 on socket 0 00:05:40.766 EAL: Detected lcore 65 as core 20 on socket 0 00:05:40.766 EAL: Detected lcore 66 as core 21 on socket 0 00:05:40.766 EAL: Detected lcore 67 as core 25 on socket 0 00:05:40.766 EAL: Detected lcore 68 as core 26 on socket 0 00:05:40.766 EAL: Detected lcore 69 as core 27 on socket 0 00:05:40.766 EAL: Detected lcore 70 as core 28 on socket 0 00:05:40.766 EAL: Detected lcore 71 as core 29 on socket 0 00:05:40.766 EAL: Detected lcore 72 as core 0 on socket 1 00:05:40.766 EAL: Detected lcore 73 as core 1 on socket 1 00:05:40.766 EAL: Detected lcore 74 as core 2 on socket 1 00:05:40.766 EAL: Detected lcore 75 as core 3 on socket 1 00:05:40.766 EAL: Detected lcore 76 as core 4 on socket 1 00:05:40.766 EAL: Detected lcore 77 as core 5 on socket 1 00:05:40.766 EAL: Detected lcore 78 as core 6 on socket 1 00:05:40.766 EAL: Detected lcore 79 as core 8 on socket 1 00:05:40.766 EAL: Detected lcore 80 as core 10 on socket 1 00:05:40.766 EAL: Detected lcore 81 as core 11 on socket 1 00:05:40.766 EAL: Detected lcore 82 as core 12 on socket 1 00:05:40.766 EAL: Detected lcore 83 as core 13 on socket 1 00:05:40.766 EAL: Detected lcore 84 as core 16 on socket 1 00:05:40.766 EAL: Detected lcore 85 as core 17 on socket 1 00:05:40.766 EAL: Detected lcore 86 as core 18 on socket 1 00:05:40.766 EAL: Detected lcore 87 as core 19 on socket 1 00:05:40.766 EAL: Detected lcore 88 as core 20 on socket 1 00:05:40.766 EAL: Detected lcore 89 as core 21 on socket 1 00:05:40.766 EAL: Detected lcore 90 as core 24 on socket 1 00:05:40.766 EAL: Detected lcore 91 as core 25 on socket 1 00:05:40.766 EAL: Detected lcore 92 as core 26 on socket 1 00:05:40.766 EAL: Detected lcore 93 as core 
27 on socket 1 00:05:40.766 EAL: Detected lcore 94 as core 28 on socket 1 00:05:40.766 EAL: Detected lcore 95 as core 29 on socket 1 00:05:40.766 EAL: Maximum logical cores by configuration: 128 00:05:40.766 EAL: Detected CPU lcores: 96 00:05:40.766 EAL: Detected NUMA nodes: 2 00:05:40.766 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:40.766 EAL: Detected shared linkage of DPDK 00:05:40.766 EAL: No shared files mode enabled, IPC will be disabled 00:05:40.766 EAL: Bus pci wants IOVA as 'DC' 00:05:40.766 EAL: Buses did not request a specific IOVA mode. 00:05:40.766 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:40.766 EAL: Selected IOVA mode 'VA' 00:05:40.766 EAL: Probing VFIO support... 00:05:40.766 EAL: IOMMU type 1 (Type 1) is supported 00:05:40.766 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:40.766 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:40.766 EAL: VFIO support initialized 00:05:40.766 EAL: Ask a virtual area of 0x2e000 bytes 00:05:40.766 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:40.766 EAL: Setting up physically contiguous memory... 
00:05:40.766 EAL: Setting maximum number of open files to 524288 00:05:40.766 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:40.766 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:40.766 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:40.766 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.766 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.766 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:40.766 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:40.766 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.766 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:40.766 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.767 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.767 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:40.767 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:40.767 EAL: Ask a virtual area of 0x61000 bytes 00:05:40.767 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:40.767 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:40.767 EAL: Ask a virtual area of 0x400000000 bytes 00:05:40.767 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:40.767 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:40.767 EAL: Hugepages will be freed exactly as allocated. 
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: TSC frequency is ~2100000 KHz
00:05:40.767 EAL: Main lcore 0 is ready (tid=7f7ce83a9a00;cpuset=[0])
00:05:40.767 EAL: Trying to obtain current memory policy.
00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:40.767 EAL: Restoring previous memory policy: 0
00:05:40.767 EAL: request: mp_malloc_sync
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: Heap on socket 0 was expanded by 2MB
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:40.767 EAL: Mem event callback 'spdk:(nil)' registered
00:05:40.767
00:05:40.767
00:05:40.767 CUnit - A unit testing framework for C - Version 2.1-3
00:05:40.767 http://cunit.sourceforge.net/
00:05:40.767
00:05:40.767
00:05:40.767 Suite: components_suite
00:05:40.767 Test: vtophys_malloc_test ...passed
00:05:40.767 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:40.767 EAL: Restoring previous memory policy: 4
00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)'
00:05:40.767 EAL: request: mp_malloc_sync
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: Heap on socket 0 was expanded by 4MB
00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)'
00:05:40.767 EAL: request: mp_malloc_sync
00:05:40.767 EAL: No shared files mode enabled, IPC is disabled
00:05:40.767 EAL: Heap on socket 0 was shrunk by 4MB
00:05:40.767 EAL: Trying to obtain current memory policy.
00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.767 EAL: Trying to obtain current memory policy. 00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.767 EAL: Trying to obtain current memory policy. 00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.767 EAL: Trying to obtain current memory policy. 
00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.767 EAL: Trying to obtain current memory policy. 00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.767 EAL: Trying to obtain current memory policy. 00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.767 EAL: Trying to obtain current memory policy. 
00:05:40.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.767 EAL: Restoring previous memory policy: 4 00:05:40.767 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.767 EAL: request: mp_malloc_sync 00:05:40.767 EAL: No shared files mode enabled, IPC is disabled 00:05:40.767 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.027 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.027 EAL: request: mp_malloc_sync 00:05:41.027 EAL: No shared files mode enabled, IPC is disabled 00:05:41.027 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.027 EAL: Trying to obtain current memory policy. 00:05:41.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.027 EAL: Restoring previous memory policy: 4 00:05:41.027 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.027 EAL: request: mp_malloc_sync 00:05:41.027 EAL: No shared files mode enabled, IPC is disabled 00:05:41.027 EAL: Heap on socket 0 was expanded by 514MB 00:05:41.027 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.287 EAL: request: mp_malloc_sync 00:05:41.287 EAL: No shared files mode enabled, IPC is disabled 00:05:41.287 EAL: Heap on socket 0 was shrunk by 514MB 00:05:41.287 EAL: Trying to obtain current memory policy. 
00:05:41.287 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:41.287 EAL: Restoring previous memory policy: 4
00:05:41.287 EAL: Calling mem event callback 'spdk:(nil)'
00:05:41.287 EAL: request: mp_malloc_sync
00:05:41.287 EAL: No shared files mode enabled, IPC is disabled
00:05:41.287 EAL: Heap on socket 0 was expanded by 1026MB
00:05:41.546 EAL: Calling mem event callback 'spdk:(nil)'
00:05:41.805 EAL: request: mp_malloc_sync
00:05:41.805 EAL: No shared files mode enabled, IPC is disabled
00:05:41.805 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:41.805 passed
00:05:41.805
00:05:41.805 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:41.805               suites      1      1    n/a      0        0
00:05:41.805                tests      2      2      2      0        0
00:05:41.805              asserts    497    497    497      0      n/a
00:05:41.805
00:05:41.805 Elapsed time =    0.970 seconds
00:05:41.805 EAL: Calling mem event callback 'spdk:(nil)'
00:05:41.805 EAL: request: mp_malloc_sync
00:05:41.805 EAL: No shared files mode enabled, IPC is disabled
00:05:41.805 EAL: Heap on socket 0 was shrunk by 2MB
00:05:41.805 EAL: No shared files mode enabled, IPC is disabled
00:05:41.805 EAL: No shared files mode enabled, IPC is disabled
00:05:41.805 EAL: No shared files mode enabled, IPC is disabled
00:05:41.805
00:05:41.805 real	0m1.099s
00:05:41.805 user	0m0.655s
00:05:41.805 sys	0m0.415s
00:05:41.805 12:19:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.805 12:19:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:41.805 ************************************
00:05:41.805 END TEST env_vtophys
00:05:41.805 ************************************
00:05:41.805 12:19:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:41.805 12:19:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.805 12:19:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.805 12:19:47 env -- common/autotest_common.sh@10 -- # set +x
00:05:41.805
************************************
00:05:41.805 START TEST env_pci
00:05:41.805 ************************************
00:05:41.805 12:19:47 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:41.805
00:05:41.805
00:05:41.805 CUnit - A unit testing framework for C - Version 2.1-3
00:05:41.805 http://cunit.sourceforge.net/
00:05:41.805
00:05:41.805
00:05:41.805 Suite: pci
00:05:41.805 Test: pci_hook ...[2024-11-20 12:19:47.451620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4178379 has claimed it
00:05:41.805 EAL: Cannot find device (10000:00:01.0)
00:05:41.805 EAL: Failed to attach device on primary process
00:05:41.805 passed
00:05:41.805
00:05:41.805 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:41.805               suites      1      1    n/a      0        0
00:05:41.805                tests      1      1      1      0        0
00:05:41.805              asserts     25     25     25      0      n/a
00:05:41.805
00:05:41.805 Elapsed time =    0.026 seconds
00:05:41.805
00:05:41.805 real	0m0.046s
00:05:41.805 user	0m0.013s
00:05:41.805 sys	0m0.032s
00:05:41.805 12:19:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.805 12:19:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:41.805 ************************************
00:05:41.805 END TEST env_pci
00:05:41.805 ************************************
00:05:41.805 12:19:47 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:41.805 12:19:47 env -- env/env.sh@15 -- # uname
00:05:41.805 12:19:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:41.806 12:19:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:41.806 12:19:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:41.806 12:19:47 env --
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:41.806 12:19:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.806 12:19:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.806 ************************************ 00:05:41.806 START TEST env_dpdk_post_init 00:05:41.806 ************************************ 00:05:41.806 12:19:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.065 EAL: Detected CPU lcores: 96 00:05:42.065 EAL: Detected NUMA nodes: 2 00:05:42.065 EAL: Detected shared linkage of DPDK 00:05:42.065 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.065 EAL: Selected IOVA mode 'VA' 00:05:42.065 EAL: VFIO support initialized 00:05:42.065 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.065 EAL: Using IOMMU type 1 (Type 1) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:42.065 EAL: Ignore mapping IO port bar(1) 00:05:42.065 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:43.002 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:43.002 EAL: Ignore mapping IO port bar(1) 00:05:43.002 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:43.002 EAL: Ignore mapping IO port bar(1) 00:05:43.002 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:43.002 EAL: Ignore mapping IO port bar(1) 00:05:43.002 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:43.003 EAL: Ignore mapping IO port bar(1) 00:05:43.003 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:43.003 EAL: Ignore mapping IO port bar(1) 00:05:43.003 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:43.003 EAL: Ignore mapping IO port bar(1) 00:05:43.003 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:43.003 EAL: Ignore mapping IO port bar(1) 00:05:43.003 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:43.003 EAL: Ignore mapping IO port bar(1) 00:05:43.003 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:47.192 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:47.192 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:47.192 Starting DPDK initialization... 00:05:47.192 Starting SPDK post initialization... 00:05:47.192 SPDK NVMe probe 00:05:47.192 Attaching to 0000:5e:00.0 00:05:47.192 Attached to 0000:5e:00.0 00:05:47.192 Cleaning up... 
00:05:47.192
00:05:47.192 real	0m4.960s
00:05:47.192 user	0m3.523s
00:05:47.192 sys	0m0.499s
00:05:47.192 12:19:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.192 12:19:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:47.192 ************************************
00:05:47.192 END TEST env_dpdk_post_init
00:05:47.192 ************************************
00:05:47.192 12:19:52 env -- env/env.sh@26 -- # uname
00:05:47.192 12:19:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:47.192 12:19:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:47.192 12:19:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:47.192 12:19:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:47.192 12:19:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:47.192 ************************************
00:05:47.192 START TEST env_mem_callbacks
00:05:47.192 ************************************
00:05:47.192 12:19:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:47.192 EAL: Detected CPU lcores: 96
00:05:47.192 EAL: Detected NUMA nodes: 2
00:05:47.192 EAL: Detected shared linkage of DPDK
00:05:47.192 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:47.192 EAL: Selected IOVA mode 'VA'
00:05:47.192 EAL: VFIO support initialized
00:05:47.192 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:47.192
00:05:47.192
00:05:47.192 CUnit - A unit testing framework for C - Version 2.1-3
00:05:47.192 http://cunit.sourceforge.net/
00:05:47.192
00:05:47.192
00:05:47.192 Suite: memory
00:05:47.192 Test: test ...
00:05:47.192 register 0x200000200000 2097152
00:05:47.192 malloc 3145728
00:05:47.192 register 0x200000400000 4194304
00:05:47.192 buf 0x200000500000 len 3145728 PASSED
00:05:47.192 malloc 64
00:05:47.192 buf 0x2000004fff40 len 64 PASSED
00:05:47.192 malloc 4194304
00:05:47.192 register 0x200000800000 6291456
00:05:47.192 buf 0x200000a00000 len 4194304 PASSED
00:05:47.192 free 0x200000500000 3145728
00:05:47.192 free 0x2000004fff40 64
00:05:47.192 unregister 0x200000400000 4194304 PASSED
00:05:47.192 free 0x200000a00000 4194304
00:05:47.192 unregister 0x200000800000 6291456 PASSED
00:05:47.192 malloc 8388608
00:05:47.192 register 0x200000400000 10485760
00:05:47.192 buf 0x200000600000 len 8388608 PASSED
00:05:47.192 free 0x200000600000 8388608
00:05:47.192 unregister 0x200000400000 10485760 PASSED
00:05:47.192 passed
00:05:47.192
00:05:47.192 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:47.192               suites      1      1    n/a      0        0
00:05:47.192                tests      1      1      1      0        0
00:05:47.192              asserts     15     15     15      0      n/a
00:05:47.192
00:05:47.192 Elapsed time =    0.008 seconds
00:05:47.192
00:05:47.192 real	0m0.059s
00:05:47.192 user	0m0.023s
00:05:47.192 sys	0m0.035s
00:05:47.192 12:19:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.192 12:19:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:47.192 ************************************
00:05:47.192 END TEST env_mem_callbacks
00:05:47.192 ************************************
00:05:47.192
00:05:47.192 real	0m6.841s
00:05:47.192 user	0m4.582s
00:05:47.192 sys	0m1.328s
00:05:47.192 12:19:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:47.192 12:19:52 env -- common/autotest_common.sh@10 -- # set +x
00:05:47.192 ************************************
00:05:47.192 END TEST env
00:05:47.192 ************************************
00:05:47.192 12:19:52 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:47.192 12:19:52
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.192 12:19:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.192 12:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.192 ************************************ 00:05:47.192 START TEST rpc 00:05:47.192 ************************************ 00:05:47.192 12:19:52 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:47.192 * Looking for test storage... 00:05:47.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.193 12:19:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.193 12:19:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.193 12:19:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.193 12:19:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.193 12:19:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.193 12:19:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.193 12:19:52 rpc -- scripts/common.sh@345 -- # : 1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.193 12:19:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.193 12:19:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.193 12:19:52 rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.193 12:19:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.193 12:19:52 rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.193 12:19:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.193 12:19:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.193 12:19:52 rpc -- scripts/common.sh@368 -- # return 0 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.193 --rc genhtml_branch_coverage=1 00:05:47.193 --rc genhtml_function_coverage=1 00:05:47.193 --rc genhtml_legend=1 00:05:47.193 --rc geninfo_all_blocks=1 00:05:47.193 --rc geninfo_unexecuted_blocks=1 00:05:47.193 00:05:47.193 ' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.193 --rc genhtml_branch_coverage=1 00:05:47.193 --rc genhtml_function_coverage=1 00:05:47.193 --rc genhtml_legend=1 00:05:47.193 --rc geninfo_all_blocks=1 00:05:47.193 --rc geninfo_unexecuted_blocks=1 00:05:47.193 00:05:47.193 ' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:47.193 --rc genhtml_branch_coverage=1 00:05:47.193 --rc genhtml_function_coverage=1 00:05:47.193 --rc genhtml_legend=1 00:05:47.193 --rc geninfo_all_blocks=1 00:05:47.193 --rc geninfo_unexecuted_blocks=1 00:05:47.193 00:05:47.193 ' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.193 --rc genhtml_branch_coverage=1 00:05:47.193 --rc genhtml_function_coverage=1 00:05:47.193 --rc genhtml_legend=1 00:05:47.193 --rc geninfo_all_blocks=1 00:05:47.193 --rc geninfo_unexecuted_blocks=1 00:05:47.193 00:05:47.193 ' 00:05:47.193 12:19:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4179429 00:05:47.193 12:19:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:47.193 12:19:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.193 12:19:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4179429 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 4179429 ']' 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.193 12:19:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.451 [2024-11-20 12:19:52.982680] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:05:47.451 [2024-11-20 12:19:52.982728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179429 ] 00:05:47.451 [2024-11-20 12:19:53.058752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.451 [2024-11-20 12:19:53.097462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.452 [2024-11-20 12:19:53.097500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4179429' to capture a snapshot of events at runtime. 00:05:47.452 [2024-11-20 12:19:53.097510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.452 [2024-11-20 12:19:53.097515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.452 [2024-11-20 12:19:53.097521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4179429 for offline analysis/debug. 
00:05:47.452 [2024-11-20 12:19:53.098072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.389 12:19:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.389 12:19:53 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.389 12:19:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.389 12:19:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:48.389 12:19:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:48.389 12:19:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:48.389 12:19:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.389 12:19:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.389 12:19:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 ************************************ 00:05:48.389 START TEST rpc_integrity 00:05:48.389 ************************************ 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.389 12:19:53 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.389 { 00:05:48.389 "name": "Malloc0", 00:05:48.389 "aliases": [ 00:05:48.389 "cdfd4018-94d1-4405-b4d1-23677d8a9502" 00:05:48.389 ], 00:05:48.389 "product_name": "Malloc disk", 00:05:48.389 "block_size": 512, 00:05:48.389 "num_blocks": 16384, 00:05:48.389 "uuid": "cdfd4018-94d1-4405-b4d1-23677d8a9502", 00:05:48.389 "assigned_rate_limits": { 00:05:48.389 "rw_ios_per_sec": 0, 00:05:48.389 "rw_mbytes_per_sec": 0, 00:05:48.389 "r_mbytes_per_sec": 0, 00:05:48.389 "w_mbytes_per_sec": 0 00:05:48.389 }, 00:05:48.389 "claimed": false, 00:05:48.389 "zoned": false, 00:05:48.389 "supported_io_types": { 00:05:48.389 "read": true, 00:05:48.389 "write": true, 00:05:48.389 "unmap": true, 00:05:48.389 "flush": true, 00:05:48.389 "reset": true, 00:05:48.389 "nvme_admin": false, 00:05:48.389 "nvme_io": false, 00:05:48.389 "nvme_io_md": false, 00:05:48.389 "write_zeroes": true, 00:05:48.389 "zcopy": true, 00:05:48.389 "get_zone_info": false, 00:05:48.389 
"zone_management": false, 00:05:48.389 "zone_append": false, 00:05:48.389 "compare": false, 00:05:48.389 "compare_and_write": false, 00:05:48.389 "abort": true, 00:05:48.389 "seek_hole": false, 00:05:48.389 "seek_data": false, 00:05:48.389 "copy": true, 00:05:48.389 "nvme_iov_md": false 00:05:48.389 }, 00:05:48.389 "memory_domains": [ 00:05:48.389 { 00:05:48.389 "dma_device_id": "system", 00:05:48.389 "dma_device_type": 1 00:05:48.389 }, 00:05:48.389 { 00:05:48.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.389 "dma_device_type": 2 00:05:48.389 } 00:05:48.389 ], 00:05:48.389 "driver_specific": {} 00:05:48.389 } 00:05:48.389 ]' 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 [2024-11-20 12:19:53.963351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.389 [2024-11-20 12:19:53.963380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.389 [2024-11-20 12:19:53.963393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20266e0 00:05:48.389 [2024-11-20 12:19:53.963399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.389 [2024-11-20 12:19:53.964489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.389 [2024-11-20 12:19:53.964510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.389 Passthru0 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.389 12:19:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.389 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.389 { 00:05:48.389 "name": "Malloc0", 00:05:48.389 "aliases": [ 00:05:48.389 "cdfd4018-94d1-4405-b4d1-23677d8a9502" 00:05:48.389 ], 00:05:48.389 "product_name": "Malloc disk", 00:05:48.389 "block_size": 512, 00:05:48.389 "num_blocks": 16384, 00:05:48.389 "uuid": "cdfd4018-94d1-4405-b4d1-23677d8a9502", 00:05:48.389 "assigned_rate_limits": { 00:05:48.389 "rw_ios_per_sec": 0, 00:05:48.389 "rw_mbytes_per_sec": 0, 00:05:48.389 "r_mbytes_per_sec": 0, 00:05:48.389 "w_mbytes_per_sec": 0 00:05:48.389 }, 00:05:48.389 "claimed": true, 00:05:48.389 "claim_type": "exclusive_write", 00:05:48.389 "zoned": false, 00:05:48.389 "supported_io_types": { 00:05:48.389 "read": true, 00:05:48.389 "write": true, 00:05:48.389 "unmap": true, 00:05:48.389 "flush": true, 00:05:48.389 "reset": true, 00:05:48.389 "nvme_admin": false, 00:05:48.389 "nvme_io": false, 00:05:48.389 "nvme_io_md": false, 00:05:48.389 "write_zeroes": true, 00:05:48.389 "zcopy": true, 00:05:48.389 "get_zone_info": false, 00:05:48.389 "zone_management": false, 00:05:48.389 "zone_append": false, 00:05:48.389 "compare": false, 00:05:48.389 "compare_and_write": false, 00:05:48.389 "abort": true, 00:05:48.389 "seek_hole": false, 00:05:48.389 "seek_data": false, 00:05:48.389 "copy": true, 00:05:48.389 "nvme_iov_md": false 00:05:48.389 }, 00:05:48.389 "memory_domains": [ 00:05:48.389 { 00:05:48.389 "dma_device_id": "system", 00:05:48.389 "dma_device_type": 1 00:05:48.389 }, 00:05:48.389 { 00:05:48.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.389 "dma_device_type": 2 00:05:48.389 } 00:05:48.389 ], 00:05:48.389 "driver_specific": {} 00:05:48.389 }, 00:05:48.389 { 
00:05:48.389 "name": "Passthru0", 00:05:48.390 "aliases": [ 00:05:48.390 "1aa88ba9-0e7e-5e35-bf50-1b3aaac46e11" 00:05:48.390 ], 00:05:48.390 "product_name": "passthru", 00:05:48.390 "block_size": 512, 00:05:48.390 "num_blocks": 16384, 00:05:48.390 "uuid": "1aa88ba9-0e7e-5e35-bf50-1b3aaac46e11", 00:05:48.390 "assigned_rate_limits": { 00:05:48.390 "rw_ios_per_sec": 0, 00:05:48.390 "rw_mbytes_per_sec": 0, 00:05:48.390 "r_mbytes_per_sec": 0, 00:05:48.390 "w_mbytes_per_sec": 0 00:05:48.390 }, 00:05:48.390 "claimed": false, 00:05:48.390 "zoned": false, 00:05:48.390 "supported_io_types": { 00:05:48.390 "read": true, 00:05:48.390 "write": true, 00:05:48.390 "unmap": true, 00:05:48.390 "flush": true, 00:05:48.390 "reset": true, 00:05:48.390 "nvme_admin": false, 00:05:48.390 "nvme_io": false, 00:05:48.390 "nvme_io_md": false, 00:05:48.390 "write_zeroes": true, 00:05:48.390 "zcopy": true, 00:05:48.390 "get_zone_info": false, 00:05:48.390 "zone_management": false, 00:05:48.390 "zone_append": false, 00:05:48.390 "compare": false, 00:05:48.390 "compare_and_write": false, 00:05:48.390 "abort": true, 00:05:48.390 "seek_hole": false, 00:05:48.390 "seek_data": false, 00:05:48.390 "copy": true, 00:05:48.390 "nvme_iov_md": false 00:05:48.390 }, 00:05:48.390 "memory_domains": [ 00:05:48.390 { 00:05:48.390 "dma_device_id": "system", 00:05:48.390 "dma_device_type": 1 00:05:48.390 }, 00:05:48.390 { 00:05:48.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.390 "dma_device_type": 2 00:05:48.390 } 00:05:48.390 ], 00:05:48.390 "driver_specific": { 00:05:48.390 "passthru": { 00:05:48.390 "name": "Passthru0", 00:05:48.390 "base_bdev_name": "Malloc0" 00:05:48.390 } 00:05:48.390 } 00:05:48.390 } 00:05:48.390 ]' 00:05:48.390 12:19:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.390 12:19:54 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.390 12:19:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.390 00:05:48.390 real 0m0.259s 00:05:48.390 user 0m0.168s 00:05:48.390 sys 0m0.031s 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.390 12:19:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.390 ************************************ 00:05:48.390 END TEST rpc_integrity 00:05:48.390 ************************************ 00:05:48.390 12:19:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.390 12:19:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.390 12:19:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.390 12:19:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 ************************************ 00:05:48.649 START TEST rpc_plugins 
00:05:48.649 ************************************ 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.649 { 00:05:48.649 "name": "Malloc1", 00:05:48.649 "aliases": [ 00:05:48.649 "5c443309-a242-40e8-9019-5965954cb828" 00:05:48.649 ], 00:05:48.649 "product_name": "Malloc disk", 00:05:48.649 "block_size": 4096, 00:05:48.649 "num_blocks": 256, 00:05:48.649 "uuid": "5c443309-a242-40e8-9019-5965954cb828", 00:05:48.649 "assigned_rate_limits": { 00:05:48.649 "rw_ios_per_sec": 0, 00:05:48.649 "rw_mbytes_per_sec": 0, 00:05:48.649 "r_mbytes_per_sec": 0, 00:05:48.649 "w_mbytes_per_sec": 0 00:05:48.649 }, 00:05:48.649 "claimed": false, 00:05:48.649 "zoned": false, 00:05:48.649 "supported_io_types": { 00:05:48.649 "read": true, 00:05:48.649 "write": true, 00:05:48.649 "unmap": true, 00:05:48.649 "flush": true, 00:05:48.649 "reset": true, 00:05:48.649 "nvme_admin": false, 00:05:48.649 "nvme_io": false, 00:05:48.649 "nvme_io_md": false, 00:05:48.649 "write_zeroes": true, 00:05:48.649 "zcopy": true, 00:05:48.649 "get_zone_info": false, 00:05:48.649 "zone_management": false, 00:05:48.649 
"zone_append": false, 00:05:48.649 "compare": false, 00:05:48.649 "compare_and_write": false, 00:05:48.649 "abort": true, 00:05:48.649 "seek_hole": false, 00:05:48.649 "seek_data": false, 00:05:48.649 "copy": true, 00:05:48.649 "nvme_iov_md": false 00:05:48.649 }, 00:05:48.649 "memory_domains": [ 00:05:48.649 { 00:05:48.649 "dma_device_id": "system", 00:05:48.649 "dma_device_type": 1 00:05:48.649 }, 00:05:48.649 { 00:05:48.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.649 "dma_device_type": 2 00:05:48.649 } 00:05:48.649 ], 00:05:48.649 "driver_specific": {} 00:05:48.649 } 00:05:48.649 ]' 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.649 12:19:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.649 00:05:48.649 real 0m0.142s 00:05:48.649 user 0m0.086s 00:05:48.649 sys 0m0.019s 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.649 12:19:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 ************************************ 
00:05:48.649 END TEST rpc_plugins 00:05:48.649 ************************************ 00:05:48.649 12:19:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.649 12:19:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.649 12:19:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.649 12:19:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.649 ************************************ 00:05:48.649 START TEST rpc_trace_cmd_test 00:05:48.649 ************************************ 00:05:48.649 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:48.649 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.649 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.650 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.650 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.650 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.650 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.650 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4179429", 00:05:48.650 "tpoint_group_mask": "0x8", 00:05:48.650 "iscsi_conn": { 00:05:48.650 "mask": "0x2", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "scsi": { 00:05:48.650 "mask": "0x4", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "bdev": { 00:05:48.650 "mask": "0x8", 00:05:48.650 "tpoint_mask": "0xffffffffffffffff" 00:05:48.650 }, 00:05:48.650 "nvmf_rdma": { 00:05:48.650 "mask": "0x10", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "nvmf_tcp": { 00:05:48.650 "mask": "0x20", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "ftl": { 00:05:48.650 "mask": "0x40", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "blobfs": { 00:05:48.650 "mask": "0x80", 00:05:48.650 
"tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "dsa": { 00:05:48.650 "mask": "0x200", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "thread": { 00:05:48.650 "mask": "0x400", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "nvme_pcie": { 00:05:48.650 "mask": "0x800", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "iaa": { 00:05:48.650 "mask": "0x1000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "nvme_tcp": { 00:05:48.650 "mask": "0x2000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "bdev_nvme": { 00:05:48.650 "mask": "0x4000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "sock": { 00:05:48.650 "mask": "0x8000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "blob": { 00:05:48.650 "mask": "0x10000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "bdev_raid": { 00:05:48.650 "mask": "0x20000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 }, 00:05:48.650 "scheduler": { 00:05:48.650 "mask": "0x40000", 00:05:48.650 "tpoint_mask": "0x0" 00:05:48.650 } 00:05:48.650 }' 00:05:48.650 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.908 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:48.908 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:48.909 00:05:48.909 real 0m0.216s 00:05:48.909 user 0m0.183s 00:05:48.909 sys 0m0.025s 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.909 12:19:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.909 ************************************ 00:05:48.909 END TEST rpc_trace_cmd_test 00:05:48.909 ************************************ 00:05:48.909 12:19:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.909 12:19:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.909 12:19:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.909 12:19:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.909 12:19:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.909 12:19:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.909 ************************************ 00:05:48.909 START TEST rpc_daemon_integrity 00:05:48.909 ************************************ 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.909 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.168 { 00:05:49.168 "name": "Malloc2", 00:05:49.168 "aliases": [ 00:05:49.168 "21f1b6f6-9b69-4323-92d9-78abf20e2d5f" 00:05:49.168 ], 00:05:49.168 "product_name": "Malloc disk", 00:05:49.168 "block_size": 512, 00:05:49.168 "num_blocks": 16384, 00:05:49.168 "uuid": "21f1b6f6-9b69-4323-92d9-78abf20e2d5f", 00:05:49.168 "assigned_rate_limits": { 00:05:49.168 "rw_ios_per_sec": 0, 00:05:49.168 "rw_mbytes_per_sec": 0, 00:05:49.168 "r_mbytes_per_sec": 0, 00:05:49.168 "w_mbytes_per_sec": 0 00:05:49.168 }, 00:05:49.168 "claimed": false, 00:05:49.168 "zoned": false, 00:05:49.168 "supported_io_types": { 00:05:49.168 "read": true, 00:05:49.168 "write": true, 00:05:49.168 "unmap": true, 00:05:49.168 "flush": true, 00:05:49.168 "reset": true, 00:05:49.168 "nvme_admin": false, 00:05:49.168 "nvme_io": false, 00:05:49.168 "nvme_io_md": false, 00:05:49.168 "write_zeroes": true, 00:05:49.168 "zcopy": true, 00:05:49.168 "get_zone_info": false, 00:05:49.168 "zone_management": false, 00:05:49.168 "zone_append": false, 00:05:49.168 "compare": false, 00:05:49.168 "compare_and_write": false, 00:05:49.168 "abort": true, 00:05:49.168 "seek_hole": false, 00:05:49.168 "seek_data": false, 00:05:49.168 "copy": true, 00:05:49.168 "nvme_iov_md": false 00:05:49.168 }, 00:05:49.168 "memory_domains": [ 00:05:49.168 { 
00:05:49.168 "dma_device_id": "system", 00:05:49.168 "dma_device_type": 1 00:05:49.168 }, 00:05:49.168 { 00:05:49.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.168 "dma_device_type": 2 00:05:49.168 } 00:05:49.168 ], 00:05:49.168 "driver_specific": {} 00:05:49.168 } 00:05:49.168 ]' 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.168 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.169 [2024-11-20 12:19:54.793611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.169 [2024-11-20 12:19:54.793639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.169 [2024-11-20 12:19:54.793652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20b6b70 00:05:49.169 [2024-11-20 12:19:54.793658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.169 [2024-11-20 12:19:54.794617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.169 [2024-11-20 12:19:54.794636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.169 Passthru0 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.169 { 00:05:49.169 "name": "Malloc2", 00:05:49.169 "aliases": [ 00:05:49.169 "21f1b6f6-9b69-4323-92d9-78abf20e2d5f" 00:05:49.169 ], 00:05:49.169 "product_name": "Malloc disk", 00:05:49.169 "block_size": 512, 00:05:49.169 "num_blocks": 16384, 00:05:49.169 "uuid": "21f1b6f6-9b69-4323-92d9-78abf20e2d5f", 00:05:49.169 "assigned_rate_limits": { 00:05:49.169 "rw_ios_per_sec": 0, 00:05:49.169 "rw_mbytes_per_sec": 0, 00:05:49.169 "r_mbytes_per_sec": 0, 00:05:49.169 "w_mbytes_per_sec": 0 00:05:49.169 }, 00:05:49.169 "claimed": true, 00:05:49.169 "claim_type": "exclusive_write", 00:05:49.169 "zoned": false, 00:05:49.169 "supported_io_types": { 00:05:49.169 "read": true, 00:05:49.169 "write": true, 00:05:49.169 "unmap": true, 00:05:49.169 "flush": true, 00:05:49.169 "reset": true, 00:05:49.169 "nvme_admin": false, 00:05:49.169 "nvme_io": false, 00:05:49.169 "nvme_io_md": false, 00:05:49.169 "write_zeroes": true, 00:05:49.169 "zcopy": true, 00:05:49.169 "get_zone_info": false, 00:05:49.169 "zone_management": false, 00:05:49.169 "zone_append": false, 00:05:49.169 "compare": false, 00:05:49.169 "compare_and_write": false, 00:05:49.169 "abort": true, 00:05:49.169 "seek_hole": false, 00:05:49.169 "seek_data": false, 00:05:49.169 "copy": true, 00:05:49.169 "nvme_iov_md": false 00:05:49.169 }, 00:05:49.169 "memory_domains": [ 00:05:49.169 { 00:05:49.169 "dma_device_id": "system", 00:05:49.169 "dma_device_type": 1 00:05:49.169 }, 00:05:49.169 { 00:05:49.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.169 "dma_device_type": 2 00:05:49.169 } 00:05:49.169 ], 00:05:49.169 "driver_specific": {} 00:05:49.169 }, 00:05:49.169 { 00:05:49.169 "name": "Passthru0", 00:05:49.169 "aliases": [ 00:05:49.169 "080d4df8-23e2-5bf7-8ca8-cacbb65d777a" 00:05:49.169 ], 00:05:49.169 "product_name": "passthru", 00:05:49.169 "block_size": 512, 00:05:49.169 "num_blocks": 16384, 00:05:49.169 "uuid": 
"080d4df8-23e2-5bf7-8ca8-cacbb65d777a", 00:05:49.169 "assigned_rate_limits": { 00:05:49.169 "rw_ios_per_sec": 0, 00:05:49.169 "rw_mbytes_per_sec": 0, 00:05:49.169 "r_mbytes_per_sec": 0, 00:05:49.169 "w_mbytes_per_sec": 0 00:05:49.169 }, 00:05:49.169 "claimed": false, 00:05:49.169 "zoned": false, 00:05:49.169 "supported_io_types": { 00:05:49.169 "read": true, 00:05:49.169 "write": true, 00:05:49.169 "unmap": true, 00:05:49.169 "flush": true, 00:05:49.169 "reset": true, 00:05:49.169 "nvme_admin": false, 00:05:49.169 "nvme_io": false, 00:05:49.169 "nvme_io_md": false, 00:05:49.169 "write_zeroes": true, 00:05:49.169 "zcopy": true, 00:05:49.169 "get_zone_info": false, 00:05:49.169 "zone_management": false, 00:05:49.169 "zone_append": false, 00:05:49.169 "compare": false, 00:05:49.169 "compare_and_write": false, 00:05:49.169 "abort": true, 00:05:49.169 "seek_hole": false, 00:05:49.169 "seek_data": false, 00:05:49.169 "copy": true, 00:05:49.169 "nvme_iov_md": false 00:05:49.169 }, 00:05:49.169 "memory_domains": [ 00:05:49.169 { 00:05:49.169 "dma_device_id": "system", 00:05:49.169 "dma_device_type": 1 00:05:49.169 }, 00:05:49.169 { 00:05:49.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.169 "dma_device_type": 2 00:05:49.169 } 00:05:49.169 ], 00:05:49.169 "driver_specific": { 00:05:49.169 "passthru": { 00:05:49.169 "name": "Passthru0", 00:05:49.169 "base_bdev_name": "Malloc2" 00:05:49.169 } 00:05:49.169 } 00:05:49.169 } 00:05:49.169 ]' 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.169 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.428 12:19:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.428 00:05:49.428 real 0m0.273s 00:05:49.428 user 0m0.174s 00:05:49.428 sys 0m0.037s 00:05:49.428 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.428 12:19:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.428 ************************************ 00:05:49.428 END TEST rpc_daemon_integrity 00:05:49.428 ************************************ 00:05:49.428 12:19:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.428 12:19:54 rpc -- rpc/rpc.sh@84 -- # killprocess 4179429 00:05:49.428 12:19:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 4179429 ']' 00:05:49.428 12:19:54 rpc -- common/autotest_common.sh@958 -- # kill -0 4179429 00:05:49.428 12:19:54 rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.428 12:19:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.428 12:19:54 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4179429 00:05:49.428 12:19:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.428 12:19:55 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.428 12:19:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4179429' 00:05:49.428 killing process with pid 4179429 00:05:49.428 12:19:55 rpc -- common/autotest_common.sh@973 -- # kill 4179429 00:05:49.428 12:19:55 rpc -- common/autotest_common.sh@978 -- # wait 4179429 00:05:49.688 00:05:49.688 real 0m2.559s 00:05:49.688 user 0m3.245s 00:05:49.688 sys 0m0.733s 00:05:49.688 12:19:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.688 12:19:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.688 ************************************ 00:05:49.688 END TEST rpc 00:05:49.688 ************************************ 00:05:49.688 12:19:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:49.688 12:19:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.688 12:19:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.688 12:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:49.688 ************************************ 00:05:49.688 START TEST skip_rpc 00:05:49.688 ************************************ 00:05:49.688 12:19:55 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:49.947 * Looking for test storage... 
00:05:49.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.947 12:19:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.947 --rc genhtml_branch_coverage=1 00:05:49.947 --rc genhtml_function_coverage=1 00:05:49.947 --rc genhtml_legend=1 00:05:49.947 --rc geninfo_all_blocks=1 00:05:49.947 --rc geninfo_unexecuted_blocks=1 00:05:49.947 00:05:49.947 ' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.947 --rc genhtml_branch_coverage=1 00:05:49.947 --rc genhtml_function_coverage=1 00:05:49.947 --rc genhtml_legend=1 00:05:49.947 --rc geninfo_all_blocks=1 00:05:49.947 --rc geninfo_unexecuted_blocks=1 00:05:49.947 00:05:49.947 ' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.947 --rc genhtml_branch_coverage=1 00:05:49.947 --rc genhtml_function_coverage=1 00:05:49.947 --rc genhtml_legend=1 00:05:49.947 --rc geninfo_all_blocks=1 00:05:49.947 --rc geninfo_unexecuted_blocks=1 00:05:49.947 00:05:49.947 ' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.947 --rc genhtml_branch_coverage=1 00:05:49.947 --rc genhtml_function_coverage=1 00:05:49.947 --rc genhtml_legend=1 00:05:49.947 --rc geninfo_all_blocks=1 00:05:49.947 --rc geninfo_unexecuted_blocks=1 00:05:49.947 00:05:49.947 ' 00:05:49.947 12:19:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.947 12:19:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:49.947 12:19:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.947 12:19:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.947 ************************************ 00:05:49.947 START TEST skip_rpc 00:05:49.947 ************************************ 00:05:49.947 12:19:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:49.947 12:19:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4180077 00:05:49.947 12:19:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.947 12:19:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.947 12:19:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:49.947 [2024-11-20 12:19:55.648723] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:05:49.947 [2024-11-20 12:19:55.648759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180077 ] 00:05:50.207 [2024-11-20 12:19:55.722631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.207 [2024-11-20 12:19:55.765799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.487 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.488 12:20:00 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4180077 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 4180077 ']' 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 4180077 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4180077 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4180077' 00:05:55.488 killing process with pid 4180077 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 4180077 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 4180077 00:05:55.488 00:05:55.488 real 0m5.364s 00:05:55.488 user 0m5.113s 00:05:55.488 sys 0m0.284s 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.488 12:20:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.488 ************************************ 00:05:55.488 END TEST skip_rpc 00:05:55.488 ************************************ 00:05:55.488 12:20:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:55.488 12:20:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.488 12:20:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.488 12:20:00 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.488 ************************************ 00:05:55.488 START TEST skip_rpc_with_json 00:05:55.488 ************************************ 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4181022 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4181022 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 4181022 ']' 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.488 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.488 [2024-11-20 12:20:01.083329] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:05:55.488 [2024-11-20 12:20:01.083374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4181022 ] 00:05:55.488 [2024-11-20 12:20:01.157755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.488 [2024-11-20 12:20:01.199663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.747 [2024-11-20 12:20:01.412671] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.747 request: 00:05:55.747 { 00:05:55.747 "trtype": "tcp", 00:05:55.747 "method": "nvmf_get_transports", 00:05:55.747 "req_id": 1 00:05:55.747 } 00:05:55.747 Got JSON-RPC error response 00:05:55.747 response: 00:05:55.747 { 00:05:55.747 "code": -19, 00:05:55.747 "message": "No such device" 00:05:55.747 } 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.747 [2024-11-20 12:20:01.424782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.747 12:20:01 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.747 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.006 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.006 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.006 { 00:05:56.006 "subsystems": [ 00:05:56.006 { 00:05:56.006 "subsystem": "fsdev", 00:05:56.006 "config": [ 00:05:56.006 { 00:05:56.006 "method": "fsdev_set_opts", 00:05:56.006 "params": { 00:05:56.006 "fsdev_io_pool_size": 65535, 00:05:56.006 "fsdev_io_cache_size": 256 00:05:56.006 } 00:05:56.006 } 00:05:56.006 ] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "vfio_user_target", 00:05:56.006 "config": null 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "keyring", 00:05:56.006 "config": [] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "iobuf", 00:05:56.006 "config": [ 00:05:56.006 { 00:05:56.006 "method": "iobuf_set_options", 00:05:56.006 "params": { 00:05:56.006 "small_pool_count": 8192, 00:05:56.006 "large_pool_count": 1024, 00:05:56.006 "small_bufsize": 8192, 00:05:56.006 "large_bufsize": 135168, 00:05:56.006 "enable_numa": false 00:05:56.006 } 00:05:56.006 } 00:05:56.006 ] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "sock", 00:05:56.006 "config": [ 00:05:56.006 { 00:05:56.006 "method": "sock_set_default_impl", 00:05:56.006 "params": { 00:05:56.006 "impl_name": "posix" 00:05:56.006 } 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "method": "sock_impl_set_options", 00:05:56.006 "params": { 00:05:56.006 "impl_name": "ssl", 00:05:56.006 "recv_buf_size": 4096, 00:05:56.006 "send_buf_size": 4096, 
00:05:56.006 "enable_recv_pipe": true, 00:05:56.006 "enable_quickack": false, 00:05:56.006 "enable_placement_id": 0, 00:05:56.006 "enable_zerocopy_send_server": true, 00:05:56.006 "enable_zerocopy_send_client": false, 00:05:56.006 "zerocopy_threshold": 0, 00:05:56.006 "tls_version": 0, 00:05:56.006 "enable_ktls": false 00:05:56.006 } 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "method": "sock_impl_set_options", 00:05:56.006 "params": { 00:05:56.006 "impl_name": "posix", 00:05:56.006 "recv_buf_size": 2097152, 00:05:56.006 "send_buf_size": 2097152, 00:05:56.006 "enable_recv_pipe": true, 00:05:56.006 "enable_quickack": false, 00:05:56.006 "enable_placement_id": 0, 00:05:56.006 "enable_zerocopy_send_server": true, 00:05:56.006 "enable_zerocopy_send_client": false, 00:05:56.006 "zerocopy_threshold": 0, 00:05:56.006 "tls_version": 0, 00:05:56.006 "enable_ktls": false 00:05:56.006 } 00:05:56.006 } 00:05:56.006 ] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "vmd", 00:05:56.006 "config": [] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "accel", 00:05:56.006 "config": [ 00:05:56.006 { 00:05:56.006 "method": "accel_set_options", 00:05:56.006 "params": { 00:05:56.006 "small_cache_size": 128, 00:05:56.006 "large_cache_size": 16, 00:05:56.006 "task_count": 2048, 00:05:56.006 "sequence_count": 2048, 00:05:56.006 "buf_count": 2048 00:05:56.006 } 00:05:56.006 } 00:05:56.006 ] 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "subsystem": "bdev", 00:05:56.006 "config": [ 00:05:56.006 { 00:05:56.006 "method": "bdev_set_options", 00:05:56.006 "params": { 00:05:56.006 "bdev_io_pool_size": 65535, 00:05:56.006 "bdev_io_cache_size": 256, 00:05:56.006 "bdev_auto_examine": true, 00:05:56.006 "iobuf_small_cache_size": 128, 00:05:56.006 "iobuf_large_cache_size": 16 00:05:56.006 } 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "method": "bdev_raid_set_options", 00:05:56.006 "params": { 00:05:56.006 "process_window_size_kb": 1024, 00:05:56.006 "process_max_bandwidth_mb_sec": 0 
00:05:56.006 } 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "method": "bdev_iscsi_set_options", 00:05:56.006 "params": { 00:05:56.006 "timeout_sec": 30 00:05:56.006 } 00:05:56.006 }, 00:05:56.006 { 00:05:56.006 "method": "bdev_nvme_set_options", 00:05:56.006 "params": { 00:05:56.006 "action_on_timeout": "none", 00:05:56.006 "timeout_us": 0, 00:05:56.006 "timeout_admin_us": 0, 00:05:56.006 "keep_alive_timeout_ms": 10000, 00:05:56.006 "arbitration_burst": 0, 00:05:56.006 "low_priority_weight": 0, 00:05:56.006 "medium_priority_weight": 0, 00:05:56.006 "high_priority_weight": 0, 00:05:56.006 "nvme_adminq_poll_period_us": 10000, 00:05:56.007 "nvme_ioq_poll_period_us": 0, 00:05:56.007 "io_queue_requests": 0, 00:05:56.007 "delay_cmd_submit": true, 00:05:56.007 "transport_retry_count": 4, 00:05:56.007 "bdev_retry_count": 3, 00:05:56.007 "transport_ack_timeout": 0, 00:05:56.007 "ctrlr_loss_timeout_sec": 0, 00:05:56.007 "reconnect_delay_sec": 0, 00:05:56.007 "fast_io_fail_timeout_sec": 0, 00:05:56.007 "disable_auto_failback": false, 00:05:56.007 "generate_uuids": false, 00:05:56.007 "transport_tos": 0, 00:05:56.007 "nvme_error_stat": false, 00:05:56.007 "rdma_srq_size": 0, 00:05:56.007 "io_path_stat": false, 00:05:56.007 "allow_accel_sequence": false, 00:05:56.007 "rdma_max_cq_size": 0, 00:05:56.007 "rdma_cm_event_timeout_ms": 0, 00:05:56.007 "dhchap_digests": [ 00:05:56.007 "sha256", 00:05:56.007 "sha384", 00:05:56.007 "sha512" 00:05:56.007 ], 00:05:56.007 "dhchap_dhgroups": [ 00:05:56.007 "null", 00:05:56.007 "ffdhe2048", 00:05:56.007 "ffdhe3072", 00:05:56.007 "ffdhe4096", 00:05:56.007 "ffdhe6144", 00:05:56.007 "ffdhe8192" 00:05:56.007 ] 00:05:56.007 } 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "method": "bdev_nvme_set_hotplug", 00:05:56.007 "params": { 00:05:56.007 "period_us": 100000, 00:05:56.007 "enable": false 00:05:56.007 } 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "method": "bdev_wait_for_examine" 00:05:56.007 } 00:05:56.007 ] 00:05:56.007 }, 00:05:56.007 { 
00:05:56.007 "subsystem": "scsi", 00:05:56.007 "config": null 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "scheduler", 00:05:56.007 "config": [ 00:05:56.007 { 00:05:56.007 "method": "framework_set_scheduler", 00:05:56.007 "params": { 00:05:56.007 "name": "static" 00:05:56.007 } 00:05:56.007 } 00:05:56.007 ] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "vhost_scsi", 00:05:56.007 "config": [] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "vhost_blk", 00:05:56.007 "config": [] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "ublk", 00:05:56.007 "config": [] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "nbd", 00:05:56.007 "config": [] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "nvmf", 00:05:56.007 "config": [ 00:05:56.007 { 00:05:56.007 "method": "nvmf_set_config", 00:05:56.007 "params": { 00:05:56.007 "discovery_filter": "match_any", 00:05:56.007 "admin_cmd_passthru": { 00:05:56.007 "identify_ctrlr": false 00:05:56.007 }, 00:05:56.007 "dhchap_digests": [ 00:05:56.007 "sha256", 00:05:56.007 "sha384", 00:05:56.007 "sha512" 00:05:56.007 ], 00:05:56.007 "dhchap_dhgroups": [ 00:05:56.007 "null", 00:05:56.007 "ffdhe2048", 00:05:56.007 "ffdhe3072", 00:05:56.007 "ffdhe4096", 00:05:56.007 "ffdhe6144", 00:05:56.007 "ffdhe8192" 00:05:56.007 ] 00:05:56.007 } 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "method": "nvmf_set_max_subsystems", 00:05:56.007 "params": { 00:05:56.007 "max_subsystems": 1024 00:05:56.007 } 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "method": "nvmf_set_crdt", 00:05:56.007 "params": { 00:05:56.007 "crdt1": 0, 00:05:56.007 "crdt2": 0, 00:05:56.007 "crdt3": 0 00:05:56.007 } 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "method": "nvmf_create_transport", 00:05:56.007 "params": { 00:05:56.007 "trtype": "TCP", 00:05:56.007 "max_queue_depth": 128, 00:05:56.007 "max_io_qpairs_per_ctrlr": 127, 00:05:56.007 "in_capsule_data_size": 4096, 00:05:56.007 "max_io_size": 131072, 00:05:56.007 
"io_unit_size": 131072, 00:05:56.007 "max_aq_depth": 128, 00:05:56.007 "num_shared_buffers": 511, 00:05:56.007 "buf_cache_size": 4294967295, 00:05:56.007 "dif_insert_or_strip": false, 00:05:56.007 "zcopy": false, 00:05:56.007 "c2h_success": true, 00:05:56.007 "sock_priority": 0, 00:05:56.007 "abort_timeout_sec": 1, 00:05:56.007 "ack_timeout": 0, 00:05:56.007 "data_wr_pool_size": 0 00:05:56.007 } 00:05:56.007 } 00:05:56.007 ] 00:05:56.007 }, 00:05:56.007 { 00:05:56.007 "subsystem": "iscsi", 00:05:56.007 "config": [ 00:05:56.007 { 00:05:56.007 "method": "iscsi_set_options", 00:05:56.007 "params": { 00:05:56.007 "node_base": "iqn.2016-06.io.spdk", 00:05:56.007 "max_sessions": 128, 00:05:56.007 "max_connections_per_session": 2, 00:05:56.007 "max_queue_depth": 64, 00:05:56.007 "default_time2wait": 2, 00:05:56.007 "default_time2retain": 20, 00:05:56.007 "first_burst_length": 8192, 00:05:56.007 "immediate_data": true, 00:05:56.007 "allow_duplicated_isid": false, 00:05:56.007 "error_recovery_level": 0, 00:05:56.007 "nop_timeout": 60, 00:05:56.007 "nop_in_interval": 30, 00:05:56.007 "disable_chap": false, 00:05:56.007 "require_chap": false, 00:05:56.007 "mutual_chap": false, 00:05:56.007 "chap_group": 0, 00:05:56.007 "max_large_datain_per_connection": 64, 00:05:56.007 "max_r2t_per_connection": 4, 00:05:56.007 "pdu_pool_size": 36864, 00:05:56.007 "immediate_data_pool_size": 16384, 00:05:56.007 "data_out_pool_size": 2048 00:05:56.007 } 00:05:56.007 } 00:05:56.007 ] 00:05:56.007 } 00:05:56.007 ] 00:05:56.007 } 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4181022 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4181022 ']' 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4181022 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4181022 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4181022' 00:05:56.007 killing process with pid 4181022 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4181022 00:05:56.007 12:20:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4181022 00:05:56.265 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4181146 00:05:56.265 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.265 12:20:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:01.537 12:20:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4181146 00:06:01.537 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 4181146 ']' 00:06:01.537 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 4181146 00:06:01.537 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4181146 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4181146' 00:06:01.538 killing process with pid 4181146 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 4181146 00:06:01.538 12:20:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 4181146 00:06:01.538 12:20:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.538 12:20:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.796 00:06:01.796 real 0m6.275s 00:06:01.796 user 0m5.972s 00:06:01.796 sys 0m0.595s 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.796 ************************************ 00:06:01.796 END TEST skip_rpc_with_json 00:06:01.796 ************************************ 00:06:01.796 12:20:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.796 12:20:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.796 12:20:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.796 12:20:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.796 ************************************ 00:06:01.796 START TEST skip_rpc_with_delay 00:06:01.796 ************************************ 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.796 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.797 [2024-11-20 12:20:07.428273] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.797 00:06:01.797 real 0m0.067s 00:06:01.797 user 0m0.045s 00:06:01.797 sys 0m0.022s 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.797 12:20:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.797 ************************************ 00:06:01.797 END TEST skip_rpc_with_delay 00:06:01.797 ************************************ 00:06:01.797 12:20:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.797 12:20:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.797 12:20:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.797 12:20:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.797 12:20:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.797 12:20:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.797 ************************************ 00:06:01.797 START TEST exit_on_failed_rpc_init 00:06:01.797 ************************************ 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4182157 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4182157 
00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 4182157 ']' 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.797 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.797 [2024-11-20 12:20:07.555550] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:01.797 [2024-11-20 12:20:07.555593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4182157 ] 00:06:02.056 [2024-11-20 12:20:07.629528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.056 [2024-11-20 12:20:07.671454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.315 
12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.315 12:20:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.315 [2024-11-20 12:20:07.948998] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:02.315 [2024-11-20 12:20:07.949042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4182243 ]
00:06:02.315 [2024-11-20 12:20:08.022717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.315 [2024-11-20 12:20:08.063416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.315 [2024-11-20 12:20:08.063468] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:02.315 [2024-11-20 12:20:08.063478] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:02.315 [2024-11-20 12:20:08.063486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4182157
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 4182157 ']'
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 4182157
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4182157
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4182157'
killing process with pid 4182157
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 4182157
00:06:02.574 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 4182157
00:06:02.834
00:06:02.834 real 0m0.942s
00:06:02.834 user 0m1.009s
00:06:02.834 sys 0m0.385s
00:06:02.834 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.834 12:20:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:02.834 ************************************
00:06:02.834 END TEST exit_on_failed_rpc_init
00:06:02.834 ************************************
00:06:02.834 12:20:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:06:02.834
00:06:02.834 real 0m13.101s
00:06:02.834 user 0m12.345s
00:06:02.834 sys 0m1.565s
00:06:02.834 12:20:08 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:02.834 12:20:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:02.834 ************************************
00:06:02.834 END TEST skip_rpc
00:06:02.834 ************************************
00:06:02.834 12:20:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:02.834 12:20:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.834 12:20:08 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.834 12:20:08 -- common/autotest_common.sh@10 -- # set +x
00:06:02.834 ************************************
00:06:02.834 START TEST rpc_client
00:06:02.834 ************************************
00:06:02.834 12:20:08 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:03.094 * Looking for test storage...
00:06:03.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:03.094 12:20:08 rpc_client -- scripts/common.sh@368 -- # return 0
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:03.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.094 --rc genhtml_branch_coverage=1
00:06:03.094 --rc genhtml_function_coverage=1
00:06:03.094 --rc genhtml_legend=1
00:06:03.094 --rc geninfo_all_blocks=1
00:06:03.094 --rc geninfo_unexecuted_blocks=1
00:06:03.094
00:06:03.094 '
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:03.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.094 --rc genhtml_branch_coverage=1
00:06:03.094 --rc genhtml_function_coverage=1
00:06:03.094 --rc genhtml_legend=1
00:06:03.094 --rc geninfo_all_blocks=1
00:06:03.094 --rc geninfo_unexecuted_blocks=1
00:06:03.094
00:06:03.094 '
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:03.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.094 --rc genhtml_branch_coverage=1
00:06:03.094 --rc genhtml_function_coverage=1
00:06:03.094 --rc genhtml_legend=1
00:06:03.094 --rc geninfo_all_blocks=1
00:06:03.094 --rc geninfo_unexecuted_blocks=1
00:06:03.094
00:06:03.094 '
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:03.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.094 --rc genhtml_branch_coverage=1
00:06:03.094 --rc genhtml_function_coverage=1
00:06:03.094 --rc genhtml_legend=1
00:06:03.094 --rc geninfo_all_blocks=1
00:06:03.094 --rc geninfo_unexecuted_blocks=1
00:06:03.094
00:06:03.094 '
00:06:03.094 12:20:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:06:03.094 OK
00:06:03.094 12:20:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:03.094
00:06:03.094 real 0m0.198s
00:06:03.094 user 0m0.113s
00:06:03.094 sys 0m0.097s
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.094 12:20:08 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:03.094 ************************************
00:06:03.094 END TEST rpc_client
00:06:03.094 ************************************
00:06:03.094 12:20:08 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:03.094 12:20:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:03.094 12:20:08 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.094 12:20:08 -- common/autotest_common.sh@10 -- # set +x
00:06:03.094 ************************************
00:06:03.094 START TEST json_config
00:06:03.094 ************************************
00:06:03.094 12:20:08 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:06:03.354 12:20:08 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:03.354 12:20:08 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:06:03.354 12:20:08 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:03.354 12:20:08 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:03.354 12:20:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:03.354 12:20:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:03.354 12:20:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:03.354 12:20:08 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:03.354 12:20:08 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:03.354 12:20:08 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:03.354 12:20:08 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:03.354 12:20:08 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:03.354 12:20:08 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:03.354 12:20:08 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:03.354 12:20:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:03.354 12:20:08 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:03.355 12:20:08 json_config -- scripts/common.sh@345 -- # : 1
00:06:03.355 12:20:08 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:03.355 12:20:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.355 12:20:08 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:03.355 12:20:08 json_config -- scripts/common.sh@353 -- # local d=1
00:06:03.355 12:20:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.355 12:20:08 json_config -- scripts/common.sh@355 -- # echo 1
00:06:03.355 12:20:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:03.355 12:20:08 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:03.355 12:20:08 json_config -- scripts/common.sh@353 -- # local d=2
00:06:03.355 12:20:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.355 12:20:08 json_config -- scripts/common.sh@355 -- # echo 2
00:06:03.355 12:20:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:03.355 12:20:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:03.355 12:20:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:03.355 12:20:08 json_config -- scripts/common.sh@368 -- # return 0
00:06:03.355 12:20:08 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.355 12:20:08 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:03.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.355 --rc genhtml_branch_coverage=1
00:06:03.355 --rc genhtml_function_coverage=1
00:06:03.355 --rc genhtml_legend=1
00:06:03.355 --rc geninfo_all_blocks=1
00:06:03.355 --rc geninfo_unexecuted_blocks=1
00:06:03.355
00:06:03.355 '
00:06:03.355 12:20:08 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:03.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.355 --rc genhtml_branch_coverage=1
00:06:03.355 --rc genhtml_function_coverage=1
00:06:03.355 --rc genhtml_legend=1
00:06:03.355 --rc geninfo_all_blocks=1
00:06:03.355 --rc geninfo_unexecuted_blocks=1
00:06:03.355
00:06:03.355 '
00:06:03.355 12:20:08 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:03.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.355 --rc genhtml_branch_coverage=1
00:06:03.355 --rc genhtml_function_coverage=1
00:06:03.355 --rc genhtml_legend=1
00:06:03.355 --rc geninfo_all_blocks=1
00:06:03.355 --rc geninfo_unexecuted_blocks=1
00:06:03.355
00:06:03.355 '
00:06:03.355 12:20:08 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:03.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.355 --rc genhtml_branch_coverage=1
00:06:03.355 --rc genhtml_function_coverage=1
00:06:03.355 --rc genhtml_legend=1
00:06:03.355 --rc geninfo_all_blocks=1
00:06:03.355 --rc geninfo_unexecuted_blocks=1
00:06:03.355
00:06:03.355 '
00:06:03.355 12:20:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:03.355 12:20:08 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:03.355 12:20:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:03.355 12:20:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:03.355 12:20:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:03.355 12:20:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:03.355 12:20:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.355 12:20:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.355 12:20:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.355 12:20:09 json_config -- paths/export.sh@5 -- # export PATH
00:06:03.355 12:20:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@51 -- # : 0
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:03.355 12:20:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:06:03.355 12:20:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:03.355 12:20:09 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:06:03.355 12:20:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:03.355 12:20:09 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:03.355 12:20:09 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:06:03.355 12:20:09 json_config -- json_config/common.sh@9 -- # local app=target
00:06:03.355 12:20:09 json_config -- json_config/common.sh@10 -- # shift
00:06:03.355 12:20:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:03.355 12:20:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:03.355 12:20:09 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:03.355 12:20:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:03.355 12:20:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:03.355 12:20:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4182597
00:06:03.355 12:20:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:06:03.355 12:20:09 json_config -- json_config/common.sh@25 -- # waitforlisten 4182597 /var/tmp/spdk_tgt.sock
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@835 -- # '[' -z 4182597 ']'
00:06:03.356 12:20:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:03.356 12:20:09 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:03.356 [2024-11-20 12:20:09.078655] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:06:03.356 [2024-11-20 12:20:09.078704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4182597 ]
00:06:03.923 [2024-11-20 12:20:09.531237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.923 [2024-11-20 12:20:09.587889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@868 -- # return 0
00:06:04.183 12:20:09 json_config -- json_config/common.sh@26 -- # echo ''
00:06:04.183
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:04.183 12:20:09 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:06:04.183 12:20:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:06:04.183 12:20:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:06:07.471 12:20:13 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:06:07.472 12:20:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:07.472 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:06:07.472 12:20:13 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:06:07.472 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@51 -- # local get_types
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@54 -- # sort
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:06:07.763 12:20:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:07.763 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@62 -- # return 0
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:06:07.763 12:20:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:07.763 12:20:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:06:07.763 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
MallocForNvmf0
00:06:07.763 12:20:13 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:07.763 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:06:08.022 MallocForNvmf1
00:06:08.022 12:20:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:06:08.022 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:06:08.281 [2024-11-20 12:20:13.850151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:08.281 12:20:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:08.281 12:20:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:08.539 12:20:14 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:08.539 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:06:08.539 12:20:14 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:08.539 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:06:08.798 12:20:14 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:08.798 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:06:09.057 [2024-11-20 12:20:14.640628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:06:09.057 12:20:14 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:06:09.057 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:09.057 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:09.057 12:20:14 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:06:09.057 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:09.057 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:09.057 12:20:14 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:06:09.057 12:20:14 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:09.057 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:06:09.315 MallocBdevForConfigChangeCheck
00:06:09.315 12:20:14 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:06:09.315 12:20:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:09.315 12:20:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:09.315 12:20:14 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:06:09.315 12:20:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:09.573 12:20:15 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:06:09.573 12:20:15 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:06:09.573 12:20:15 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:06:09.573 12:20:15 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:06:09.573 12:20:15 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:06:12.108 Calling clear_iscsi_subsystem
00:06:12.108 Calling clear_nvmf_subsystem
00:06:12.108 Calling clear_nbd_subsystem
00:06:12.108 Calling clear_ublk_subsystem
00:06:12.108 Calling clear_vhost_blk_subsystem
00:06:12.108 Calling clear_vhost_scsi_subsystem
00:06:12.108 Calling clear_bdev_subsystem
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@350 -- # count=100
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@352 -- # break
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:06:12.108 12:20:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:06:12.108 12:20:17 json_config -- json_config/common.sh@31 -- # local app=target
00:06:12.108 12:20:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:12.108 12:20:17 json_config -- json_config/common.sh@35 -- # [[ -n 4182597 ]]
00:06:12.108 12:20:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4182597
00:06:12.108 12:20:17 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:12.108 12:20:17 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:12.108 12:20:17 json_config -- json_config/common.sh@41 -- # kill -0 4182597
00:06:12.108 12:20:17 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:06:12.678 12:20:18 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:06:12.678 12:20:18 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:12.678 12:20:18 json_config -- json_config/common.sh@41 -- # kill -0 4182597
00:06:12.678 12:20:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:12.678 12:20:18 json_config -- json_config/common.sh@43 -- # break
00:06:12.678 12:20:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:12.678 12:20:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:06:12.678 12:20:18 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:06:12.678 12:20:18 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:12.678 12:20:18 json_config -- json_config/common.sh@9 -- # local app=target
00:06:12.678 12:20:18 json_config -- json_config/common.sh@10 -- # shift
00:06:12.678 12:20:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:12.678 12:20:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:12.678 12:20:18 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:06:12.678 12:20:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:12.678 12:20:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:12.678 12:20:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4184162
00:06:12.678 12:20:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:06:12.678 12:20:18 json_config -- json_config/common.sh@25 -- # waitforlisten 4184162 /var/tmp/spdk_tgt.sock
00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 4184162 ']'
00:06:12.678 12:20:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.678 12:20:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.678 [2024-11-20 12:20:18.412885] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:12.678 [2024-11-20 12:20:18.412944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4184162 ] 00:06:13.246 [2024-11-20 12:20:18.874996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.246 [2024-11-20 12:20:18.930052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.536 [2024-11-20 12:20:21.965191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.536 [2024-11-20 12:20:21.997545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:17.105 12:20:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.106 12:20:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:17.106 12:20:22 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.106 00:06:17.106 12:20:22 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:17.106 12:20:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.106 INFO: Checking if target configuration is the same... 
00:06:17.106 12:20:22 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.106 12:20:22 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:17.106 12:20:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.106 + '[' 2 -ne 2 ']' 00:06:17.106 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.106 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:17.106 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.106 +++ basename /dev/fd/62 00:06:17.106 ++ mktemp /tmp/62.XXX 00:06:17.106 + tmp_file_1=/tmp/62.TAE 00:06:17.106 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.106 + tmp_file_2=/tmp/spdk_tgt_config.json.F8i 00:06:17.106 + ret=0 00:06:17.106 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.365 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.365 + diff -u /tmp/62.TAE /tmp/spdk_tgt_config.json.F8i 00:06:17.365 + echo 'INFO: JSON config files are the same' 00:06:17.365 INFO: JSON config files are the same 00:06:17.365 + rm /tmp/62.TAE /tmp/spdk_tgt_config.json.F8i 00:06:17.365 + exit 0 00:06:17.365 12:20:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:17.365 12:20:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:17.365 INFO: changing configuration and checking if this can be detected... 
00:06:17.365 12:20:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.365 12:20:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.625 12:20:23 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:17.625 12:20:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.625 12:20:23 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.625 + '[' 2 -ne 2 ']' 00:06:17.625 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:17.625 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:17.625 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.625 +++ basename /dev/fd/62 00:06:17.625 ++ mktemp /tmp/62.XXX 00:06:17.625 + tmp_file_1=/tmp/62.3kn 00:06:17.625 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.625 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.625 + tmp_file_2=/tmp/spdk_tgt_config.json.ffH 00:06:17.625 + ret=0 00:06:17.625 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.884 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:17.884 + diff -u /tmp/62.3kn /tmp/spdk_tgt_config.json.ffH 00:06:17.884 + ret=1 00:06:17.884 + echo '=== Start of file: /tmp/62.3kn ===' 00:06:17.884 + cat /tmp/62.3kn 00:06:17.884 + echo '=== End of file: /tmp/62.3kn ===' 00:06:17.884 + echo '' 00:06:17.884 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ffH ===' 00:06:17.884 + cat /tmp/spdk_tgt_config.json.ffH 00:06:18.143 + echo '=== End of file: /tmp/spdk_tgt_config.json.ffH ===' 00:06:18.143 + echo '' 00:06:18.143 + rm /tmp/62.3kn /tmp/spdk_tgt_config.json.ffH 00:06:18.143 + exit 1 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:18.143 INFO: configuration change detected. 
00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@324 -- # [[ -n 4184162 ]] 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 12:20:23 json_config -- json_config/json_config.sh@330 -- # killprocess 4184162 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@954 -- # '[' -z 4184162 ']' 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@958 -- # kill -0 
4184162 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@959 -- # uname 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184162 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4184162' 00:06:18.143 killing process with pid 4184162 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@973 -- # kill 4184162 00:06:18.143 12:20:23 json_config -- common/autotest_common.sh@978 -- # wait 4184162 00:06:20.674 12:20:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:20.674 12:20:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:20.674 12:20:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.674 12:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 12:20:25 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:20.674 12:20:25 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:20.674 INFO: Success 00:06:20.674 00:06:20.674 real 0m17.051s 00:06:20.674 user 0m17.430s 00:06:20.674 sys 0m2.813s 00:06:20.674 12:20:25 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.674 12:20:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 END TEST json_config 00:06:20.674 ************************************ 00:06:20.674 12:20:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.674 12:20:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.674 12:20:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.674 12:20:25 -- common/autotest_common.sh@10 -- # set +x 00:06:20.674 ************************************ 00:06:20.674 START TEST json_config_extra_key 00:06:20.674 ************************************ 00:06:20.674 12:20:25 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:20.674 12:20:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.674 12:20:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.674 12:20:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.674 12:20:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.674 12:20:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.675 --rc genhtml_branch_coverage=1 00:06:20.675 --rc genhtml_function_coverage=1 00:06:20.675 --rc genhtml_legend=1 00:06:20.675 --rc geninfo_all_blocks=1 
00:06:20.675 --rc geninfo_unexecuted_blocks=1 00:06:20.675 00:06:20.675 ' 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.675 --rc genhtml_branch_coverage=1 00:06:20.675 --rc genhtml_function_coverage=1 00:06:20.675 --rc genhtml_legend=1 00:06:20.675 --rc geninfo_all_blocks=1 00:06:20.675 --rc geninfo_unexecuted_blocks=1 00:06:20.675 00:06:20.675 ' 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.675 --rc genhtml_branch_coverage=1 00:06:20.675 --rc genhtml_function_coverage=1 00:06:20.675 --rc genhtml_legend=1 00:06:20.675 --rc geninfo_all_blocks=1 00:06:20.675 --rc geninfo_unexecuted_blocks=1 00:06:20.675 00:06:20.675 ' 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.675 --rc genhtml_branch_coverage=1 00:06:20.675 --rc genhtml_function_coverage=1 00:06:20.675 --rc genhtml_legend=1 00:06:20.675 --rc geninfo_all_blocks=1 00:06:20.675 --rc geninfo_unexecuted_blocks=1 00:06:20.675 00:06:20.675 ' 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.675 12:20:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.675 12:20:26 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.675 12:20:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.675 12:20:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.675 12:20:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:20.675 12:20:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:20.675 12:20:26 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.675 12:20:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:20.675 INFO: launching applications... 00:06:20.675 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4185626 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.675 Waiting for target to run... 
00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4185626 /var/tmp/spdk_tgt.sock 00:06:20.675 12:20:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:20.675 12:20:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 4185626 ']' 00:06:20.676 12:20:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.676 12:20:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.676 12:20:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.676 12:20:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.676 12:20:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.676 [2024-11-20 12:20:26.189838] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:20.676 [2024-11-20 12:20:26.189883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185626 ] 00:06:20.936 [2024-11-20 12:20:26.478290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.936 [2024-11-20 12:20:26.512294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.505 12:20:26 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.505 12:20:26 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:21.505 00:06:21.505 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:21.505 INFO: shutting down applications... 00:06:21.505 12:20:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4185626 ]] 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4185626 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4185626 00:06:21.505 12:20:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.764 12:20:27 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4185626 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.764 12:20:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.764 SPDK target shutdown done 00:06:21.764 12:20:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:21.764 Success 00:06:21.764 00:06:21.764 real 0m1.556s 00:06:21.764 user 0m1.324s 00:06:21.764 sys 0m0.393s 00:06:21.764 12:20:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.764 12:20:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.764 ************************************ 00:06:21.764 END TEST json_config_extra_key 00:06:21.764 ************************************ 00:06:22.025 12:20:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.025 12:20:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.025 12:20:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.025 12:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.025 ************************************ 00:06:22.025 START TEST alias_rpc 00:06:22.025 ************************************ 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:22.025 * Looking for test storage... 
00:06:22.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.025 12:20:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.025 --rc genhtml_branch_coverage=1 00:06:22.025 --rc genhtml_function_coverage=1 00:06:22.025 --rc genhtml_legend=1 00:06:22.025 --rc geninfo_all_blocks=1 00:06:22.025 --rc geninfo_unexecuted_blocks=1 00:06:22.025 00:06:22.025 ' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.025 --rc genhtml_branch_coverage=1 00:06:22.025 --rc genhtml_function_coverage=1 00:06:22.025 --rc genhtml_legend=1 00:06:22.025 --rc geninfo_all_blocks=1 00:06:22.025 --rc geninfo_unexecuted_blocks=1 00:06:22.025 00:06:22.025 ' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.025 --rc genhtml_branch_coverage=1 00:06:22.025 --rc genhtml_function_coverage=1 00:06:22.025 --rc genhtml_legend=1 00:06:22.025 --rc geninfo_all_blocks=1 00:06:22.025 --rc geninfo_unexecuted_blocks=1 00:06:22.025 00:06:22.025 ' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.025 --rc genhtml_branch_coverage=1 00:06:22.025 --rc genhtml_function_coverage=1 00:06:22.025 --rc genhtml_legend=1 00:06:22.025 --rc geninfo_all_blocks=1 00:06:22.025 --rc geninfo_unexecuted_blocks=1 00:06:22.025 00:06:22.025 ' 00:06:22.025 12:20:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:22.025 12:20:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4185910 00:06:22.025 12:20:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4185910 00:06:22.025 12:20:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 4185910 ']' 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.025 12:20:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.284 [2024-11-20 12:20:27.804174] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:22.284 [2024-11-20 12:20:27.804223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185910 ] 00:06:22.284 [2024-11-20 12:20:27.877317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.284 [2024-11-20 12:20:27.919131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.543 12:20:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.543 12:20:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.543 12:20:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:22.802 12:20:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4185910 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 4185910 ']' 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 4185910 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4185910 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4185910' 00:06:22.802 killing process with pid 4185910 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 4185910 00:06:22.802 12:20:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 4185910 00:06:23.061 00:06:23.062 real 0m1.121s 00:06:23.062 user 0m1.152s 00:06:23.062 sys 0m0.392s 00:06:23.062 12:20:28 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.062 12:20:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.062 ************************************ 00:06:23.062 END TEST alias_rpc 00:06:23.062 ************************************ 00:06:23.062 12:20:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:23.062 12:20:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.062 12:20:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.062 12:20:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.062 12:20:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.062 ************************************ 00:06:23.062 START TEST spdkcli_tcp 00:06:23.062 ************************************ 00:06:23.062 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.321 * Looking for test storage... 
00:06:23.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.321 12:20:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.321 --rc genhtml_branch_coverage=1 00:06:23.321 --rc genhtml_function_coverage=1 00:06:23.321 --rc genhtml_legend=1 00:06:23.321 --rc geninfo_all_blocks=1 00:06:23.321 --rc geninfo_unexecuted_blocks=1 00:06:23.321 00:06:23.321 ' 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.321 --rc genhtml_branch_coverage=1 00:06:23.321 --rc genhtml_function_coverage=1 00:06:23.321 --rc genhtml_legend=1 00:06:23.321 --rc geninfo_all_blocks=1 00:06:23.321 --rc geninfo_unexecuted_blocks=1 00:06:23.321 00:06:23.321 ' 00:06:23.321 12:20:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.321 --rc genhtml_branch_coverage=1 00:06:23.321 --rc genhtml_function_coverage=1 00:06:23.321 --rc genhtml_legend=1 00:06:23.321 --rc geninfo_all_blocks=1 00:06:23.321 --rc geninfo_unexecuted_blocks=1 00:06:23.321 00:06:23.321 ' 00:06:23.321 12:20:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.321 --rc genhtml_branch_coverage=1 00:06:23.321 --rc genhtml_function_coverage=1 00:06:23.321 --rc genhtml_legend=1 00:06:23.321 --rc geninfo_all_blocks=1 00:06:23.321 --rc geninfo_unexecuted_blocks=1 00:06:23.321 00:06:23.321 ' 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:23.321 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:23.322 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4186197 00:06:23.322 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4186197 00:06:23.322 12:20:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 4186197 ']' 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.322 12:20:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 [2024-11-20 12:20:29.002396] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:23.322 [2024-11-20 12:20:29.002445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186197 ] 00:06:23.322 [2024-11-20 12:20:29.075041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.589 [2024-11-20 12:20:29.118244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.589 [2024-11-20 12:20:29.118244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.158 12:20:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.158 12:20:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:24.158 12:20:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4186430 00:06:24.158 12:20:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.158 12:20:29 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.417 [ 00:06:24.417 "bdev_malloc_delete", 00:06:24.417 "bdev_malloc_create", 00:06:24.417 "bdev_null_resize", 00:06:24.417 "bdev_null_delete", 00:06:24.417 "bdev_null_create", 00:06:24.417 "bdev_nvme_cuse_unregister", 00:06:24.417 "bdev_nvme_cuse_register", 00:06:24.417 "bdev_opal_new_user", 00:06:24.417 "bdev_opal_set_lock_state", 00:06:24.417 "bdev_opal_delete", 00:06:24.417 "bdev_opal_get_info", 00:06:24.417 "bdev_opal_create", 00:06:24.417 "bdev_nvme_opal_revert", 00:06:24.417 "bdev_nvme_opal_init", 00:06:24.417 "bdev_nvme_send_cmd", 00:06:24.417 "bdev_nvme_set_keys", 00:06:24.418 "bdev_nvme_get_path_iostat", 00:06:24.418 "bdev_nvme_get_mdns_discovery_info", 00:06:24.418 "bdev_nvme_stop_mdns_discovery", 00:06:24.418 "bdev_nvme_start_mdns_discovery", 00:06:24.418 "bdev_nvme_set_multipath_policy", 00:06:24.418 "bdev_nvme_set_preferred_path", 00:06:24.418 "bdev_nvme_get_io_paths", 00:06:24.418 "bdev_nvme_remove_error_injection", 00:06:24.418 "bdev_nvme_add_error_injection", 00:06:24.418 "bdev_nvme_get_discovery_info", 00:06:24.418 "bdev_nvme_stop_discovery", 00:06:24.418 "bdev_nvme_start_discovery", 00:06:24.418 "bdev_nvme_get_controller_health_info", 00:06:24.418 "bdev_nvme_disable_controller", 00:06:24.418 "bdev_nvme_enable_controller", 00:06:24.418 "bdev_nvme_reset_controller", 00:06:24.418 "bdev_nvme_get_transport_statistics", 00:06:24.418 "bdev_nvme_apply_firmware", 00:06:24.418 "bdev_nvme_detach_controller", 00:06:24.418 "bdev_nvme_get_controllers", 00:06:24.418 "bdev_nvme_attach_controller", 00:06:24.418 "bdev_nvme_set_hotplug", 00:06:24.418 "bdev_nvme_set_options", 00:06:24.418 "bdev_passthru_delete", 00:06:24.418 "bdev_passthru_create", 00:06:24.418 "bdev_lvol_set_parent_bdev", 00:06:24.418 "bdev_lvol_set_parent", 00:06:24.418 "bdev_lvol_check_shallow_copy", 00:06:24.418 "bdev_lvol_start_shallow_copy", 00:06:24.418 "bdev_lvol_grow_lvstore", 00:06:24.418 
"bdev_lvol_get_lvols", 00:06:24.418 "bdev_lvol_get_lvstores", 00:06:24.418 "bdev_lvol_delete", 00:06:24.418 "bdev_lvol_set_read_only", 00:06:24.418 "bdev_lvol_resize", 00:06:24.418 "bdev_lvol_decouple_parent", 00:06:24.418 "bdev_lvol_inflate", 00:06:24.418 "bdev_lvol_rename", 00:06:24.418 "bdev_lvol_clone_bdev", 00:06:24.418 "bdev_lvol_clone", 00:06:24.418 "bdev_lvol_snapshot", 00:06:24.418 "bdev_lvol_create", 00:06:24.418 "bdev_lvol_delete_lvstore", 00:06:24.418 "bdev_lvol_rename_lvstore", 00:06:24.418 "bdev_lvol_create_lvstore", 00:06:24.418 "bdev_raid_set_options", 00:06:24.418 "bdev_raid_remove_base_bdev", 00:06:24.418 "bdev_raid_add_base_bdev", 00:06:24.418 "bdev_raid_delete", 00:06:24.418 "bdev_raid_create", 00:06:24.418 "bdev_raid_get_bdevs", 00:06:24.418 "bdev_error_inject_error", 00:06:24.418 "bdev_error_delete", 00:06:24.418 "bdev_error_create", 00:06:24.418 "bdev_split_delete", 00:06:24.418 "bdev_split_create", 00:06:24.418 "bdev_delay_delete", 00:06:24.418 "bdev_delay_create", 00:06:24.418 "bdev_delay_update_latency", 00:06:24.418 "bdev_zone_block_delete", 00:06:24.418 "bdev_zone_block_create", 00:06:24.418 "blobfs_create", 00:06:24.418 "blobfs_detect", 00:06:24.418 "blobfs_set_cache_size", 00:06:24.418 "bdev_aio_delete", 00:06:24.418 "bdev_aio_rescan", 00:06:24.418 "bdev_aio_create", 00:06:24.418 "bdev_ftl_set_property", 00:06:24.418 "bdev_ftl_get_properties", 00:06:24.418 "bdev_ftl_get_stats", 00:06:24.418 "bdev_ftl_unmap", 00:06:24.418 "bdev_ftl_unload", 00:06:24.418 "bdev_ftl_delete", 00:06:24.418 "bdev_ftl_load", 00:06:24.418 "bdev_ftl_create", 00:06:24.418 "bdev_virtio_attach_controller", 00:06:24.418 "bdev_virtio_scsi_get_devices", 00:06:24.418 "bdev_virtio_detach_controller", 00:06:24.418 "bdev_virtio_blk_set_hotplug", 00:06:24.418 "bdev_iscsi_delete", 00:06:24.418 "bdev_iscsi_create", 00:06:24.418 "bdev_iscsi_set_options", 00:06:24.418 "accel_error_inject_error", 00:06:24.418 "ioat_scan_accel_module", 00:06:24.418 "dsa_scan_accel_module", 
00:06:24.418 "iaa_scan_accel_module", 00:06:24.418 "vfu_virtio_create_fs_endpoint", 00:06:24.418 "vfu_virtio_create_scsi_endpoint", 00:06:24.418 "vfu_virtio_scsi_remove_target", 00:06:24.418 "vfu_virtio_scsi_add_target", 00:06:24.418 "vfu_virtio_create_blk_endpoint", 00:06:24.418 "vfu_virtio_delete_endpoint", 00:06:24.418 "keyring_file_remove_key", 00:06:24.418 "keyring_file_add_key", 00:06:24.418 "keyring_linux_set_options", 00:06:24.418 "fsdev_aio_delete", 00:06:24.418 "fsdev_aio_create", 00:06:24.418 "iscsi_get_histogram", 00:06:24.418 "iscsi_enable_histogram", 00:06:24.418 "iscsi_set_options", 00:06:24.418 "iscsi_get_auth_groups", 00:06:24.418 "iscsi_auth_group_remove_secret", 00:06:24.418 "iscsi_auth_group_add_secret", 00:06:24.418 "iscsi_delete_auth_group", 00:06:24.418 "iscsi_create_auth_group", 00:06:24.418 "iscsi_set_discovery_auth", 00:06:24.418 "iscsi_get_options", 00:06:24.418 "iscsi_target_node_request_logout", 00:06:24.418 "iscsi_target_node_set_redirect", 00:06:24.418 "iscsi_target_node_set_auth", 00:06:24.418 "iscsi_target_node_add_lun", 00:06:24.418 "iscsi_get_stats", 00:06:24.418 "iscsi_get_connections", 00:06:24.418 "iscsi_portal_group_set_auth", 00:06:24.418 "iscsi_start_portal_group", 00:06:24.418 "iscsi_delete_portal_group", 00:06:24.418 "iscsi_create_portal_group", 00:06:24.418 "iscsi_get_portal_groups", 00:06:24.418 "iscsi_delete_target_node", 00:06:24.418 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.418 "iscsi_target_node_add_pg_ig_maps", 00:06:24.418 "iscsi_create_target_node", 00:06:24.418 "iscsi_get_target_nodes", 00:06:24.418 "iscsi_delete_initiator_group", 00:06:24.418 "iscsi_initiator_group_remove_initiators", 00:06:24.418 "iscsi_initiator_group_add_initiators", 00:06:24.418 "iscsi_create_initiator_group", 00:06:24.418 "iscsi_get_initiator_groups", 00:06:24.418 "nvmf_set_crdt", 00:06:24.418 "nvmf_set_config", 00:06:24.418 "nvmf_set_max_subsystems", 00:06:24.418 "nvmf_stop_mdns_prr", 00:06:24.418 "nvmf_publish_mdns_prr", 
00:06:24.418 "nvmf_subsystem_get_listeners", 00:06:24.418 "nvmf_subsystem_get_qpairs", 00:06:24.418 "nvmf_subsystem_get_controllers", 00:06:24.418 "nvmf_get_stats", 00:06:24.418 "nvmf_get_transports", 00:06:24.418 "nvmf_create_transport", 00:06:24.418 "nvmf_get_targets", 00:06:24.418 "nvmf_delete_target", 00:06:24.418 "nvmf_create_target", 00:06:24.418 "nvmf_subsystem_allow_any_host", 00:06:24.418 "nvmf_subsystem_set_keys", 00:06:24.418 "nvmf_subsystem_remove_host", 00:06:24.418 "nvmf_subsystem_add_host", 00:06:24.418 "nvmf_ns_remove_host", 00:06:24.418 "nvmf_ns_add_host", 00:06:24.418 "nvmf_subsystem_remove_ns", 00:06:24.418 "nvmf_subsystem_set_ns_ana_group", 00:06:24.418 "nvmf_subsystem_add_ns", 00:06:24.418 "nvmf_subsystem_listener_set_ana_state", 00:06:24.418 "nvmf_discovery_get_referrals", 00:06:24.418 "nvmf_discovery_remove_referral", 00:06:24.418 "nvmf_discovery_add_referral", 00:06:24.418 "nvmf_subsystem_remove_listener", 00:06:24.418 "nvmf_subsystem_add_listener", 00:06:24.418 "nvmf_delete_subsystem", 00:06:24.418 "nvmf_create_subsystem", 00:06:24.418 "nvmf_get_subsystems", 00:06:24.418 "env_dpdk_get_mem_stats", 00:06:24.418 "nbd_get_disks", 00:06:24.418 "nbd_stop_disk", 00:06:24.418 "nbd_start_disk", 00:06:24.418 "ublk_recover_disk", 00:06:24.418 "ublk_get_disks", 00:06:24.418 "ublk_stop_disk", 00:06:24.418 "ublk_start_disk", 00:06:24.418 "ublk_destroy_target", 00:06:24.418 "ublk_create_target", 00:06:24.418 "virtio_blk_create_transport", 00:06:24.418 "virtio_blk_get_transports", 00:06:24.418 "vhost_controller_set_coalescing", 00:06:24.418 "vhost_get_controllers", 00:06:24.418 "vhost_delete_controller", 00:06:24.418 "vhost_create_blk_controller", 00:06:24.419 "vhost_scsi_controller_remove_target", 00:06:24.419 "vhost_scsi_controller_add_target", 00:06:24.419 "vhost_start_scsi_controller", 00:06:24.419 "vhost_create_scsi_controller", 00:06:24.419 "thread_set_cpumask", 00:06:24.419 "scheduler_set_options", 00:06:24.419 "framework_get_governor", 00:06:24.419 
"framework_get_scheduler", 00:06:24.419 "framework_set_scheduler", 00:06:24.419 "framework_get_reactors", 00:06:24.419 "thread_get_io_channels", 00:06:24.419 "thread_get_pollers", 00:06:24.419 "thread_get_stats", 00:06:24.419 "framework_monitor_context_switch", 00:06:24.419 "spdk_kill_instance", 00:06:24.419 "log_enable_timestamps", 00:06:24.419 "log_get_flags", 00:06:24.419 "log_clear_flag", 00:06:24.419 "log_set_flag", 00:06:24.419 "log_get_level", 00:06:24.419 "log_set_level", 00:06:24.419 "log_get_print_level", 00:06:24.419 "log_set_print_level", 00:06:24.419 "framework_enable_cpumask_locks", 00:06:24.419 "framework_disable_cpumask_locks", 00:06:24.419 "framework_wait_init", 00:06:24.419 "framework_start_init", 00:06:24.419 "scsi_get_devices", 00:06:24.419 "bdev_get_histogram", 00:06:24.419 "bdev_enable_histogram", 00:06:24.419 "bdev_set_qos_limit", 00:06:24.419 "bdev_set_qd_sampling_period", 00:06:24.419 "bdev_get_bdevs", 00:06:24.419 "bdev_reset_iostat", 00:06:24.419 "bdev_get_iostat", 00:06:24.419 "bdev_examine", 00:06:24.419 "bdev_wait_for_examine", 00:06:24.419 "bdev_set_options", 00:06:24.419 "accel_get_stats", 00:06:24.419 "accel_set_options", 00:06:24.419 "accel_set_driver", 00:06:24.419 "accel_crypto_key_destroy", 00:06:24.419 "accel_crypto_keys_get", 00:06:24.419 "accel_crypto_key_create", 00:06:24.419 "accel_assign_opc", 00:06:24.419 "accel_get_module_info", 00:06:24.419 "accel_get_opc_assignments", 00:06:24.419 "vmd_rescan", 00:06:24.419 "vmd_remove_device", 00:06:24.419 "vmd_enable", 00:06:24.419 "sock_get_default_impl", 00:06:24.419 "sock_set_default_impl", 00:06:24.419 "sock_impl_set_options", 00:06:24.419 "sock_impl_get_options", 00:06:24.419 "iobuf_get_stats", 00:06:24.419 "iobuf_set_options", 00:06:24.419 "keyring_get_keys", 00:06:24.419 "vfu_tgt_set_base_path", 00:06:24.419 "framework_get_pci_devices", 00:06:24.419 "framework_get_config", 00:06:24.419 "framework_get_subsystems", 00:06:24.419 "fsdev_set_opts", 00:06:24.419 "fsdev_get_opts", 
00:06:24.419 "trace_get_info", 00:06:24.419 "trace_get_tpoint_group_mask", 00:06:24.419 "trace_disable_tpoint_group", 00:06:24.419 "trace_enable_tpoint_group", 00:06:24.419 "trace_clear_tpoint_mask", 00:06:24.419 "trace_set_tpoint_mask", 00:06:24.419 "notify_get_notifications", 00:06:24.419 "notify_get_types", 00:06:24.419 "spdk_get_version", 00:06:24.419 "rpc_get_methods" 00:06:24.419 ] 00:06:24.419 12:20:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.419 12:20:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.419 12:20:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4186197 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 4186197 ']' 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 4186197 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186197 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186197' 00:06:24.419 killing process with pid 4186197 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 4186197 00:06:24.419 12:20:30 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 4186197 00:06:24.679 00:06:24.679 real 0m1.643s 00:06:24.679 user 0m3.055s 00:06:24.679 sys 0m0.474s 00:06:24.679 12:20:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.679 12:20:30 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.679 ************************************ 00:06:24.679 END TEST spdkcli_tcp 00:06:24.679 ************************************ 00:06:24.939 12:20:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.939 12:20:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.939 12:20:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.939 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 ************************************ 00:06:24.939 START TEST dpdk_mem_utility 00:06:24.939 ************************************ 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.939 * Looking for test storage... 00:06:24.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.939 12:20:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:06:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.939 --rc genhtml_branch_coverage=1 00:06:24.939 --rc genhtml_function_coverage=1 00:06:24.939 --rc genhtml_legend=1 00:06:24.939 --rc geninfo_all_blocks=1 00:06:24.939 --rc geninfo_unexecuted_blocks=1 00:06:24.939 00:06:24.939 ' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.939 --rc genhtml_branch_coverage=1 00:06:24.939 --rc genhtml_function_coverage=1 00:06:24.939 --rc genhtml_legend=1 00:06:24.939 --rc geninfo_all_blocks=1 00:06:24.939 --rc geninfo_unexecuted_blocks=1 00:06:24.939 00:06:24.939 ' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.939 --rc genhtml_branch_coverage=1 00:06:24.939 --rc genhtml_function_coverage=1 00:06:24.939 --rc genhtml_legend=1 00:06:24.939 --rc geninfo_all_blocks=1 00:06:24.939 --rc geninfo_unexecuted_blocks=1 00:06:24.939 00:06:24.939 ' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.939 --rc genhtml_branch_coverage=1 00:06:24.939 --rc genhtml_function_coverage=1 00:06:24.939 --rc genhtml_legend=1 00:06:24.939 --rc geninfo_all_blocks=1 00:06:24.939 --rc geninfo_unexecuted_blocks=1 00:06:24.939 00:06:24.939 ' 00:06:24.939 12:20:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:24.939 12:20:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4186517 00:06:24.939 12:20:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4186517 00:06:24.939 12:20:30 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 4186517 ']' 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.939 12:20:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.199 [2024-11-20 12:20:30.708411] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:25.199 [2024-11-20 12:20:30.708454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186517 ] 00:06:25.199 [2024-11-20 12:20:30.764896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.199 [2024-11-20 12:20:30.806188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.459 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.459 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:25.459 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.459 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.459 12:20:31 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.459 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.459 { 00:06:25.459 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.459 } 00:06:25.459 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.459 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.459 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:25.459 1 heaps totaling size 810.000000 MiB 00:06:25.459 size: 810.000000 MiB heap id: 0 00:06:25.459 end heaps---------- 00:06:25.459 9 mempools totaling size 595.772034 MiB 00:06:25.459 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.459 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.459 size: 92.545471 MiB name: bdev_io_4186517 00:06:25.459 size: 50.003479 MiB name: msgpool_4186517 00:06:25.459 size: 36.509338 MiB name: fsdev_io_4186517 00:06:25.459 size: 21.763794 MiB name: PDU_Pool 00:06:25.459 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.459 size: 4.133484 MiB name: evtpool_4186517 00:06:25.459 size: 0.026123 MiB name: Session_Pool 00:06:25.459 end mempools------- 00:06:25.459 6 memzones totaling size 4.142822 MiB 00:06:25.459 size: 1.000366 MiB name: RG_ring_0_4186517 00:06:25.459 size: 1.000366 MiB name: RG_ring_1_4186517 00:06:25.459 size: 1.000366 MiB name: RG_ring_4_4186517 00:06:25.459 size: 1.000366 MiB name: RG_ring_5_4186517 00:06:25.459 size: 0.125366 MiB name: RG_ring_2_4186517 00:06:25.459 size: 0.015991 MiB name: RG_ring_3_4186517 00:06:25.459 end memzones------- 00:06:25.459 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.459 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:25.459 list of free elements. 
size: 10.862488 MiB 00:06:25.459 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:25.460 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:25.460 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:25.460 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:25.460 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:25.460 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:25.460 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:25.460 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:25.460 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:25.460 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:25.460 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:25.460 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:25.460 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:25.460 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:25.460 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:25.460 list of standard malloc elements. 
size: 199.218628 MiB 00:06:25.460 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:25.460 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:25.460 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:25.460 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:25.460 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:25.460 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.460 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:25.460 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.460 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:25.460 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:25.460 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:25.460 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:25.460 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:25.460 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:25.460 list of memzone associated elements. 
size: 599.918884 MiB 00:06:25.460 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:25.460 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.460 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:25.460 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.460 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:25.460 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_4186517_0 00:06:25.460 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:25.460 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4186517_0 00:06:25.460 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:25.460 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4186517_0 00:06:25.460 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:25.460 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.460 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:25.460 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.460 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:25.460 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4186517_0 00:06:25.460 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:25.460 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4186517 00:06:25.460 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:25.460 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4186517 00:06:25.460 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:25.460 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.460 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:25.460 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.460 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:25.460 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.460 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:25.460 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.460 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:25.460 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4186517 00:06:25.460 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:25.460 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4186517 00:06:25.460 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:25.460 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4186517 00:06:25.460 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:25.460 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4186517 00:06:25.460 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:25.460 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4186517 00:06:25.460 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:25.460 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4186517 00:06:25.460 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:25.460 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.460 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:25.460 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.460 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:25.460 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:25.460 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:25.460 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4186517 00:06:25.460 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:25.460 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4186517 00:06:25.460 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:25.460 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.460 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:25.460 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.460 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:25.460 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4186517 00:06:25.460 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:25.460 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.460 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:25.460 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4186517 00:06:25.460 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:25.460 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4186517 00:06:25.460 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:25.461 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4186517 00:06:25.461 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:25.461 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.461 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.461 12:20:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4186517 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 4186517 ']' 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 4186517 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4186517 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.461 12:20:31 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4186517' 00:06:25.461 killing process with pid 4186517 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 4186517 00:06:25.461 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 4186517 00:06:26.028 00:06:26.029 real 0m1.003s 00:06:26.029 user 0m0.961s 00:06:26.029 sys 0m0.401s 00:06:26.029 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.029 12:20:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.029 ************************************ 00:06:26.029 END TEST dpdk_mem_utility 00:06:26.029 ************************************ 00:06:26.029 12:20:31 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.029 12:20:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.029 12:20:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.029 12:20:31 -- common/autotest_common.sh@10 -- # set +x 00:06:26.029 ************************************ 00:06:26.029 START TEST event 00:06:26.029 ************************************ 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:26.029 * Looking for test storage... 
00:06:26.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.029 12:20:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.029 12:20:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.029 12:20:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.029 12:20:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.029 12:20:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.029 12:20:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.029 12:20:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.029 12:20:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.029 12:20:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.029 12:20:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.029 12:20:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.029 12:20:31 event -- scripts/common.sh@344 -- # case "$op" in 00:06:26.029 12:20:31 event -- scripts/common.sh@345 -- # : 1 00:06:26.029 12:20:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.029 12:20:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.029 12:20:31 event -- scripts/common.sh@365 -- # decimal 1 00:06:26.029 12:20:31 event -- scripts/common.sh@353 -- # local d=1 00:06:26.029 12:20:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.029 12:20:31 event -- scripts/common.sh@355 -- # echo 1 00:06:26.029 12:20:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.029 12:20:31 event -- scripts/common.sh@366 -- # decimal 2 00:06:26.029 12:20:31 event -- scripts/common.sh@353 -- # local d=2 00:06:26.029 12:20:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.029 12:20:31 event -- scripts/common.sh@355 -- # echo 2 00:06:26.029 12:20:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.029 12:20:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.029 12:20:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.029 12:20:31 event -- scripts/common.sh@368 -- # return 0 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.029 --rc genhtml_branch_coverage=1 00:06:26.029 --rc genhtml_function_coverage=1 00:06:26.029 --rc genhtml_legend=1 00:06:26.029 --rc geninfo_all_blocks=1 00:06:26.029 --rc geninfo_unexecuted_blocks=1 00:06:26.029 00:06:26.029 ' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.029 --rc genhtml_branch_coverage=1 00:06:26.029 --rc genhtml_function_coverage=1 00:06:26.029 --rc genhtml_legend=1 00:06:26.029 --rc geninfo_all_blocks=1 00:06:26.029 --rc geninfo_unexecuted_blocks=1 00:06:26.029 00:06:26.029 ' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.029 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:26.029 --rc genhtml_branch_coverage=1 00:06:26.029 --rc genhtml_function_coverage=1 00:06:26.029 --rc genhtml_legend=1 00:06:26.029 --rc geninfo_all_blocks=1 00:06:26.029 --rc geninfo_unexecuted_blocks=1 00:06:26.029 00:06:26.029 ' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.029 --rc genhtml_branch_coverage=1 00:06:26.029 --rc genhtml_function_coverage=1 00:06:26.029 --rc genhtml_legend=1 00:06:26.029 --rc geninfo_all_blocks=1 00:06:26.029 --rc geninfo_unexecuted_blocks=1 00:06:26.029 00:06:26.029 ' 00:06:26.029 12:20:31 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:26.029 12:20:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.029 12:20:31 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:26.029 12:20:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.029 12:20:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.029 ************************************ 00:06:26.029 START TEST event_perf 00:06:26.029 ************************************ 00:06:26.029 12:20:31 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:26.029 Running I/O for 1 seconds...[2024-11-20 12:20:31.787691] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:26.029 [2024-11-20 12:20:31.787759] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186807 ] 00:06:26.288 [2024-11-20 12:20:31.865999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.288 [2024-11-20 12:20:31.909804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.288 [2024-11-20 12:20:31.909911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.288 [2024-11-20 12:20:31.910020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.288 [2024-11-20 12:20:31.910020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.227 Running I/O for 1 seconds... 00:06:27.227 lcore 0: 205631 00:06:27.227 lcore 1: 205630 00:06:27.227 lcore 2: 205629 00:06:27.227 lcore 3: 205631 00:06:27.227 done. 
00:06:27.227 00:06:27.227 real 0m1.183s 00:06:27.227 user 0m4.103s 00:06:27.227 sys 0m0.077s 00:06:27.227 12:20:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.227 12:20:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.228 ************************************ 00:06:27.228 END TEST event_perf 00:06:27.228 ************************************ 00:06:27.228 12:20:32 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.228 12:20:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:27.228 12:20:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.228 12:20:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.487 ************************************ 00:06:27.487 START TEST event_reactor 00:06:27.487 ************************************ 00:06:27.487 12:20:33 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.487 [2024-11-20 12:20:33.034492] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:27.487 [2024-11-20 12:20:33.034541] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187058 ] 00:06:27.487 [2024-11-20 12:20:33.109366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.487 [2024-11-20 12:20:33.148773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.432 test_start 00:06:28.432 oneshot 00:06:28.432 tick 100 00:06:28.432 tick 100 00:06:28.432 tick 250 00:06:28.432 tick 100 00:06:28.432 tick 100 00:06:28.432 tick 250 00:06:28.432 tick 100 00:06:28.432 tick 500 00:06:28.432 tick 100 00:06:28.432 tick 100 00:06:28.432 tick 250 00:06:28.432 tick 100 00:06:28.432 tick 100 00:06:28.432 test_end 00:06:28.432 00:06:28.432 real 0m1.166s 00:06:28.432 user 0m1.091s 00:06:28.432 sys 0m0.071s 00:06:28.432 12:20:34 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.432 12:20:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.432 ************************************ 00:06:28.432 END TEST event_reactor 00:06:28.432 ************************************ 00:06:28.694 12:20:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.694 12:20:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:28.694 12:20:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.694 12:20:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.694 ************************************ 00:06:28.694 START TEST event_reactor_perf 00:06:28.694 ************************************ 00:06:28.694 12:20:34 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:28.694 [2024-11-20 12:20:34.276268] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:28.694 [2024-11-20 12:20:34.276336] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187310 ] 00:06:28.694 [2024-11-20 12:20:34.355659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.694 [2024-11-20 12:20:34.396307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.071 test_start 00:06:30.071 test_end 00:06:30.071 Performance: 511911 events per second 00:06:30.071 00:06:30.071 real 0m1.180s 00:06:30.071 user 0m1.102s 00:06:30.071 sys 0m0.075s 00:06:30.071 12:20:35 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.071 12:20:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.071 ************************************ 00:06:30.071 END TEST event_reactor_perf 00:06:30.071 ************************************ 00:06:30.071 12:20:35 event -- event/event.sh@49 -- # uname -s 00:06:30.071 12:20:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.071 12:20:35 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.071 12:20:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.071 12:20:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.071 12:20:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.071 ************************************ 00:06:30.071 START TEST event_scheduler 00:06:30.071 ************************************ 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.071 * Looking for test storage... 00:06:30.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.071 12:20:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.071 --rc genhtml_branch_coverage=1 00:06:30.071 --rc genhtml_function_coverage=1 00:06:30.071 --rc genhtml_legend=1 00:06:30.071 --rc geninfo_all_blocks=1 00:06:30.071 --rc geninfo_unexecuted_blocks=1 00:06:30.071 00:06:30.071 ' 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.071 --rc genhtml_branch_coverage=1 00:06:30.071 --rc genhtml_function_coverage=1 00:06:30.071 --rc 
genhtml_legend=1 00:06:30.071 --rc geninfo_all_blocks=1 00:06:30.071 --rc geninfo_unexecuted_blocks=1 00:06:30.071 00:06:30.071 ' 00:06:30.071 12:20:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.072 --rc genhtml_branch_coverage=1 00:06:30.072 --rc genhtml_function_coverage=1 00:06:30.072 --rc genhtml_legend=1 00:06:30.072 --rc geninfo_all_blocks=1 00:06:30.072 --rc geninfo_unexecuted_blocks=1 00:06:30.072 00:06:30.072 ' 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.072 --rc genhtml_branch_coverage=1 00:06:30.072 --rc genhtml_function_coverage=1 00:06:30.072 --rc genhtml_legend=1 00:06:30.072 --rc geninfo_all_blocks=1 00:06:30.072 --rc geninfo_unexecuted_blocks=1 00:06:30.072 00:06:30.072 ' 00:06:30.072 12:20:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.072 12:20:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4187595 00:06:30.072 12:20:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.072 12:20:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.072 12:20:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4187595 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 4187595 ']' 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.072 12:20:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.072 [2024-11-20 12:20:35.727193] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:30.072 [2024-11-20 12:20:35.727249] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4187595 ] 00:06:30.072 [2024-11-20 12:20:35.803193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.331 [2024-11-20 12:20:35.846674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.331 [2024-11-20 12:20:35.846784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.331 [2024-11-20 12:20:35.846890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.331 [2024-11-20 12:20:35.846891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:30.900 12:20:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.900 [2024-11-20 12:20:36.573336] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:30.900 [2024-11-20 12:20:36.573354] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.900 [2024-11-20 12:20:36.573364] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.900 [2024-11-20 12:20:36.573370] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.900 [2024-11-20 12:20:36.573375] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.900 12:20:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.900 [2024-11-20 12:20:36.646750] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.900 12:20:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.900 12:20:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 ************************************ 00:06:31.160 START TEST scheduler_create_thread 00:06:31.160 ************************************ 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 2 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 3 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 4 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 5 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 6 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 7 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 8 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 9 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 10 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.160 12:20:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.097 12:20:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.097 12:20:37 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:32.097 12:20:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.097 12:20:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.476 12:20:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.476 12:20:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:33.476 12:20:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:33.476 12:20:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.476 12:20:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.443 12:20:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.443 00:06:34.443 real 0m3.382s 00:06:34.443 user 0m0.023s 00:06:34.443 sys 0m0.007s 00:06:34.443 12:20:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.443 12:20:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.443 ************************************ 00:06:34.443 END TEST scheduler_create_thread 00:06:34.443 ************************************ 00:06:34.443 12:20:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:34.443 12:20:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4187595 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 4187595 ']' 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 4187595 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4187595 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4187595' 00:06:34.443 killing process with pid 4187595 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 4187595 00:06:34.443 12:20:40 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 4187595 00:06:34.716 [2024-11-20 12:20:40.447001] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:34.998 00:06:34.999 real 0m5.145s 00:06:34.999 user 0m10.663s 00:06:34.999 sys 0m0.422s 00:06:34.999 12:20:40 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.999 12:20:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.999 ************************************ 00:06:34.999 END TEST event_scheduler 00:06:34.999 ************************************ 00:06:34.999 12:20:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.999 12:20:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.999 12:20:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.999 12:20:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.999 12:20:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.999 ************************************ 00:06:34.999 START TEST app_repeat 00:06:34.999 ************************************ 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4188562 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4188562' 00:06:34.999 Process app_repeat pid: 4188562 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:34.999 spdk_app_start Round 0 00:06:34.999 12:20:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4188562 /var/tmp/spdk-nbd.sock 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4188562 ']' 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.999 12:20:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.325 [2024-11-20 12:20:40.765555] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:35.325 [2024-11-20 12:20:40.765606] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188562 ] 00:06:35.325 [2024-11-20 12:20:40.840478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.325 [2024-11-20 12:20:40.887544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.325 [2024-11-20 12:20:40.887546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.325 12:20:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.325 12:20:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.325 12:20:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.584 Malloc0 00:06:35.584 12:20:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.843 Malloc1 00:06:35.843 12:20:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.843 
12:20:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.843 12:20:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.843 /dev/nbd0 00:06:36.102 12:20:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.102 12:20:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:36.102 1+0 records in 00:06:36.102 1+0 records out 00:06:36.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00012252 s, 33.4 MB/s 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.102 12:20:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.102 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.102 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.102 12:20:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.102 /dev/nbd1 00:06:36.360 12:20:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:36.361 12:20:41 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.361 1+0 records in 00:06:36.361 1+0 records out 00:06:36.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019464 s, 21.0 MB/s 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:36.361 12:20:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.361 12:20:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.361 12:20:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.361 { 00:06:36.361 "nbd_device": "/dev/nbd0", 00:06:36.361 "bdev_name": "Malloc0" 00:06:36.361 }, 00:06:36.361 { 00:06:36.361 "nbd_device": "/dev/nbd1", 00:06:36.361 "bdev_name": "Malloc1" 00:06:36.361 } 00:06:36.361 ]' 00:06:36.361 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.361 { 00:06:36.361 "nbd_device": "/dev/nbd0", 00:06:36.361 "bdev_name": "Malloc0" 00:06:36.361 
}, 00:06:36.361 { 00:06:36.361 "nbd_device": "/dev/nbd1", 00:06:36.361 "bdev_name": "Malloc1" 00:06:36.361 } 00:06:36.361 ]' 00:06:36.361 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.620 /dev/nbd1' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.620 /dev/nbd1' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.620 256+0 records in 00:06:36.620 256+0 records out 00:06:36.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010261 s, 102 MB/s 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.620 256+0 records in 00:06:36.620 256+0 records out 00:06:36.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014686 s, 71.4 MB/s 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.620 256+0 records in 00:06:36.620 256+0 records out 00:06:36.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152408 s, 68.8 MB/s 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.620 12:20:42 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.620 12:20:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.879 12:20:42 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.879 12:20:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.137 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.138 12:20:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.138 12:20:42 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.397 12:20:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.656 [2024-11-20 12:20:43.237708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.656 [2024-11-20 12:20:43.274746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.656 [2024-11-20 12:20:43.274747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.656 [2024-11-20 12:20:43.315405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.656 [2024-11-20 12:20:43.315449] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.945 12:20:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.945 12:20:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.945 spdk_app_start Round 1 00:06:40.945 12:20:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4188562 /var/tmp/spdk-nbd.sock 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4188562 ']' 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.945 12:20:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:40.945 12:20:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.945 Malloc0 00:06:40.945 12:20:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.945 Malloc1 00:06:41.204 12:20:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.204 /dev/nbd0 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.204 1+0 records in 00:06:41.204 1+0 records out 00:06:41.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250029 s, 16.4 MB/s 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.204 12:20:46 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.204 12:20:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.204 12:20:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.462 /dev/nbd1 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.462 1+0 records in 00:06:41.462 1+0 records out 00:06:41.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209381 s, 19.6 MB/s 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.462 12:20:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.462 12:20:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.721 { 00:06:41.721 "nbd_device": "/dev/nbd0", 00:06:41.721 "bdev_name": "Malloc0" 00:06:41.721 }, 00:06:41.721 { 00:06:41.721 "nbd_device": "/dev/nbd1", 00:06:41.721 "bdev_name": "Malloc1" 00:06:41.721 } 00:06:41.721 ]' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.721 { 00:06:41.721 "nbd_device": "/dev/nbd0", 00:06:41.721 "bdev_name": "Malloc0" 00:06:41.721 }, 00:06:41.721 { 00:06:41.721 "nbd_device": "/dev/nbd1", 00:06:41.721 "bdev_name": "Malloc1" 00:06:41.721 } 00:06:41.721 ]' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.721 /dev/nbd1' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.721 /dev/nbd1' 00:06:41.721 
12:20:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.721 256+0 records in 00:06:41.721 256+0 records out 00:06:41.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105247 s, 99.6 MB/s 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.721 256+0 records in 00:06:41.721 256+0 records out 00:06:41.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135359 s, 77.5 MB/s 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.721 12:20:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.980 256+0 records in 00:06:41.980 256+0 records out 00:06:41.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147101 s, 71.3 MB/s 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.980 12:20:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.981 12:20:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.981 12:20:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.239 12:20:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.239 12:20:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.497 12:20:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.497 12:20:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.756 12:20:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.014 [2024-11-20 12:20:48.542449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.014 [2024-11-20 12:20:48.579190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.014 [2024-11-20 12:20:48.579191] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.014 [2024-11-20 12:20:48.620617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.014 [2024-11-20 12:20:48.620656] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.298 spdk_app_start Round 2 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4188562 /var/tmp/spdk-nbd.sock 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4188562 ']' 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.298 12:20:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.298 Malloc0 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.298 Malloc1 00:06:46.298 12:20:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.298 12:20:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.558 /dev/nbd0 00:06:46.558 12:20:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.558 12:20:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.558 1+0 records in 00:06:46.558 1+0 records out 00:06:46.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200494 s, 20.4 MB/s 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.558 12:20:52 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.558 12:20:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.558 12:20:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.558 12:20:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.558 12:20:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.818 /dev/nbd1 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.818 1+0 records in 00:06:46.818 1+0 records out 00:06:46.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018787 s, 21.8 MB/s 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.818 12:20:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.818 12:20:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.077 { 00:06:47.077 "nbd_device": "/dev/nbd0", 00:06:47.077 "bdev_name": "Malloc0" 00:06:47.077 }, 00:06:47.077 { 00:06:47.077 "nbd_device": "/dev/nbd1", 00:06:47.077 "bdev_name": "Malloc1" 00:06:47.077 } 00:06:47.077 ]' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.077 { 00:06:47.077 "nbd_device": "/dev/nbd0", 00:06:47.077 "bdev_name": "Malloc0" 00:06:47.077 }, 00:06:47.077 { 00:06:47.077 "nbd_device": "/dev/nbd1", 00:06:47.077 "bdev_name": "Malloc1" 00:06:47.077 } 00:06:47.077 ]' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.077 /dev/nbd1' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.077 12:20:52 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.077 /dev/nbd1' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.077 256+0 records in 00:06:47.077 256+0 records out 00:06:47.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106306 s, 98.6 MB/s 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.077 256+0 records in 00:06:47.077 256+0 records out 00:06:47.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143243 s, 73.2 MB/s 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.077 256+0 records in 00:06:47.077 256+0 records out 00:06:47.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148761 s, 70.5 MB/s 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.077 12:20:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.335 12:20:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.335 12:20:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.335 12:20:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.335 12:20:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.335 12:20:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.594 12:20:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.594 12:20:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.853 12:20:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.853 12:20:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.112 12:20:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.112 [2024-11-20 12:20:53.825483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.112 [2024-11-20 12:20:53.861525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.112 [2024-11-20 12:20:53.861526] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.372 [2024-11-20 12:20:53.902227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.372 [2024-11-20 12:20:53.902265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.659 12:20:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4188562 /var/tmp/spdk-nbd.sock 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 4188562 ']' 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
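The nbd_dd_data_verify sequence traced above follows a simple write-then-verify pattern: dd copies a random pattern file onto each exported /dev/nbdX, then cmp reads the device back and compares it byte-for-byte against the source file. A minimal, hedged sketch of that pattern follows; plain temp files stand in for the nbd devices so it runs without an SPDK target, and all file names here are illustrative.

```shell
# Sketch of the write-then-verify pattern from nbd_dd_data_verify.
# Assumption: a plain file substitutes for /dev/nbd0, so oflag=direct
# from the real test is omitted (it would fail on ordinary files).
set -e
pattern=$(mktemp)            # plays the role of test/event/nbdrandtest
dev=$(mktemp)                # stand-in for /dev/nbd0
# Write 1 MiB of random data to the pattern file, then copy it to the "device".
dd if=/dev/urandom of="$pattern" bs=4096 count=256 2>/dev/null
dd if="$pattern" of="$dev" bs=4096 count=256 conv=notrunc 2>/dev/null
# cmp exits non-zero on the first differing byte, failing the test.
cmp -b -n 1M "$pattern" "$dev"
echo verify-ok
```

The real harness repeats the cmp step once per entry in nbd_list, then removes the pattern file before stopping the disks.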
00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.659 12:20:56 event.app_repeat -- event/event.sh@39 -- # killprocess 4188562 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 4188562 ']' 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 4188562 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4188562 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.659 12:20:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.660 12:20:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4188562' 00:06:51.660 killing process with pid 4188562 00:06:51.660 12:20:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 4188562 00:06:51.660 12:20:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 4188562 00:06:51.660 spdk_app_start is called in Round 0. 00:06:51.660 Shutdown signal received, stop current app iteration 00:06:51.660 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:06:51.660 spdk_app_start is called in Round 1. 00:06:51.660 Shutdown signal received, stop current app iteration 00:06:51.660 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:06:51.660 spdk_app_start is called in Round 2. 
00:06:51.660 Shutdown signal received, stop current app iteration 00:06:51.660 Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 reinitialization... 00:06:51.660 spdk_app_start is called in Round 3. 00:06:51.660 Shutdown signal received, stop current app iteration 00:06:51.660 12:20:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.660 12:20:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.660 00:06:51.660 real 0m16.349s 00:06:51.660 user 0m35.924s 00:06:51.660 sys 0m2.526s 00:06:51.660 12:20:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.660 12:20:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 ************************************ 00:06:51.660 END TEST app_repeat 00:06:51.660 ************************************ 00:06:51.660 12:20:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.660 12:20:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.660 12:20:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.660 12:20:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.660 12:20:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.660 ************************************ 00:06:51.660 START TEST cpu_locks 00:06:51.660 ************************************ 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:51.660 * Looking for test storage... 
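The waitfornbd_exit calls traced earlier poll /proc/partitions in a bounded loop (up to 20 attempts) after nbd_stop_disk, breaking out once the device name disappears. A hedged sketch of that retry pattern follows; a temp file stands in for /proc/partitions so it runs anywhere, and the table contents are illustrative.

```shell
# Sketch of the bounded-retry wait performed by waitfornbd_exit.
# Assumption: a temp file substitutes for /proc/partitions.
wait_for_gone() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$table" || return 0   # entry gone: done waiting
        sleep 0.1
    done
    return 1                                      # still listed after 20 tries
}
parts=$(mktemp)
printf '259 0 1048576 nbd0\n259 1 1048576 nbd1\n' > "$parts"
( sleep 0.3; sed -i '/\bnbd0\b/d' "$parts" ) &   # simulate the device detaching
wait_for_gone nbd0 "$parts" && echo nbd0-gone
wait
```

Bounding the loop keeps a stuck nbd teardown from hanging the whole test run; the harness fails the test instead once the retries are exhausted.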
00:06:51.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.660 12:20:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.660 --rc genhtml_branch_coverage=1 00:06:51.660 --rc genhtml_function_coverage=1 00:06:51.660 --rc genhtml_legend=1 00:06:51.660 --rc geninfo_all_blocks=1 00:06:51.660 --rc geninfo_unexecuted_blocks=1 00:06:51.660 00:06:51.660 ' 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.660 --rc genhtml_branch_coverage=1 00:06:51.660 --rc genhtml_function_coverage=1 00:06:51.660 --rc genhtml_legend=1 00:06:51.660 --rc geninfo_all_blocks=1 00:06:51.660 --rc geninfo_unexecuted_blocks=1 
00:06:51.660 00:06:51.660 ' 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.660 --rc genhtml_branch_coverage=1 00:06:51.660 --rc genhtml_function_coverage=1 00:06:51.660 --rc genhtml_legend=1 00:06:51.660 --rc geninfo_all_blocks=1 00:06:51.660 --rc geninfo_unexecuted_blocks=1 00:06:51.660 00:06:51.660 ' 00:06:51.660 12:20:57 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.660 --rc genhtml_branch_coverage=1 00:06:51.660 --rc genhtml_function_coverage=1 00:06:51.660 --rc genhtml_legend=1 00:06:51.660 --rc geninfo_all_blocks=1 00:06:51.660 --rc geninfo_unexecuted_blocks=1 00:06:51.660 00:06:51.660 ' 00:06:51.661 12:20:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:51.661 12:20:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:51.661 12:20:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:51.661 12:20:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:51.661 12:20:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.661 12:20:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.661 12:20:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.661 ************************************ 00:06:51.661 START TEST default_locks 00:06:51.661 ************************************ 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4191569 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4191569 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4191569 ']' 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.661 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.661 [2024-11-20 12:20:57.402282] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:51.661 [2024-11-20 12:20:57.402321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4191569 ] 00:06:51.920 [2024-11-20 12:20:57.460193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.920 [2024-11-20 12:20:57.503623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.178 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.178 12:20:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:52.178 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4191569 00:06:52.178 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4191569 00:06:52.178 12:20:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.438 lslocks: write error 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4191569 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 4191569 ']' 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 4191569 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191569 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4191569' 00:06:52.438 killing process with pid 4191569 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 4191569 00:06:52.438 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 4191569 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4191569 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4191569 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 4191569 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 4191569 ']' 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
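The locks_exist helper traced above runs `lslocks -p <pid> | grep -q spdk_cpu_lock` to confirm that spdk_tgt took an exclusive file lock for each CPU core it claimed, which is what prevents a second instance pinned to the same core from starting. A hedged sketch of that primitive follows using flock(1); the lock file name is illustrative, not SPDK's actual path.

```shell
# Sketch of the per-core file locking that locks_exist inspects.
# Assumption: /tmp/spdk_cpu_lock_demo.XXXXXX is a hypothetical stand-in
# for the lock file lslocks reports as spdk_cpu_lock.
lockfile=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)
exec 9>"$lockfile"
flock -n 9 && echo core-lock-held          # first holder acquires the lock
# A second open of the same file creates an independent lock attempt,
# which must fail non-blockingly while fd 9 still holds the lock.
if ( exec 8>"$lockfile"; flock -n 8 ); then
    echo second-holder-unexpectedly-ok
else
    echo second-holder-blocked
fi
```

The `--disable-cpumask-locks` flag seen later in the trace skips taking these locks, which is why the second spdk_tgt instance in the non_locking tests can share core 0 with the first.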
00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4191569) - No such process 00:06:52.697 ERROR: process (pid: 4191569) is no longer running 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.697 00:06:52.697 real 0m1.071s 00:06:52.697 user 0m1.052s 00:06:52.697 sys 0m0.492s 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.697 12:20:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.697 ************************************ 00:06:52.697 END TEST default_locks 00:06:52.697 ************************************ 00:06:52.956 12:20:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.956 12:20:58 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.956 12:20:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.956 12:20:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.956 ************************************ 00:06:52.956 START TEST default_locks_via_rpc 00:06:52.956 ************************************ 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4191773 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4191773 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4191773 ']' 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.956 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.957 [2024-11-20 12:20:58.544716] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:52.957 [2024-11-20 12:20:58.544755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4191773 ] 00:06:52.957 [2024-11-20 12:20:58.600108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.957 [2024-11-20 12:20:58.642730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.216 12:20:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4191773 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4191773 00:06:53.216 12:20:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4191773 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 4191773 ']' 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 4191773 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.475 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191773 00:06:53.733 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.733 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.733 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4191773' 00:06:53.733 killing process with pid 4191773 00:06:53.733 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 4191773 00:06:53.733 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 4191773 00:06:53.992 00:06:53.992 real 0m1.039s 00:06:53.992 user 0m1.019s 00:06:53.992 sys 0m0.458s 00:06:53.992 12:20:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.993 12:20:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.993 ************************************ 00:06:53.993 END TEST default_locks_via_rpc 00:06:53.993 ************************************ 00:06:53.993 12:20:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:53.993 12:20:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.993 12:20:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.993 12:20:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.993 ************************************ 00:06:53.993 START TEST non_locking_app_on_locked_coremask 00:06:53.993 ************************************ 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4191876 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4191876 /var/tmp/spdk.sock 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4191876 ']' 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.993 12:20:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.993 [2024-11-20 12:20:59.657564] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:53.993 [2024-11-20 12:20:59.657608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4191876 ] 00:06:53.993 [2024-11-20 12:20:59.734262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.252 [2024-11-20 12:20:59.775484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4192093 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4192093 /var/tmp/spdk2.sock 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4192093 ']' 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.252 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.511 [2024-11-20 12:21:00.059721] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:54.511 [2024-11-20 12:21:00.059779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192093 ] 00:06:54.511 [2024-11-20 12:21:00.154953] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.511 [2024-11-20 12:21:00.154986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.511 [2024-11-20 12:21:00.239483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.448 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.448 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.448 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4191876 00:06:55.448 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4191876 00:06:55.448 12:21:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.707 lslocks: write error 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4191876 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4191876 ']' 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4191876 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4191876 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4191876' 00:06:55.707 killing process with pid 4191876 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4191876 00:06:55.707 12:21:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4191876 00:06:56.275 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4192093 00:06:56.275 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4192093 ']' 00:06:56.275 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4192093 00:06:56.275 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4192093 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4192093' 00:06:56.534 killing process with pid 4192093 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4192093 00:06:56.534 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4192093 00:06:56.794 00:06:56.794 real 0m2.783s 00:06:56.794 user 0m2.930s 00:06:56.794 sys 0m0.923s 00:06:56.794 12:21:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.794 12:21:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.794 ************************************ 00:06:56.794 END TEST non_locking_app_on_locked_coremask 00:06:56.794 ************************************ 00:06:56.794 12:21:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.794 12:21:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.794 12:21:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.794 12:21:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.794 ************************************ 00:06:56.794 START TEST locking_app_on_unlocked_coremask 00:06:56.794 ************************************ 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4192554 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4192554 /var/tmp/spdk.sock 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4192554 ']' 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.794 12:21:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.794 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.794 [2024-11-20 12:21:02.515597] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:56.795 [2024-11-20 12:21:02.515642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192554 ] 00:06:57.054 [2024-11-20 12:21:02.591842] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.054 [2024-11-20 12:21:02.591868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.054 [2024-11-20 12:21:02.634318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4192713 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4192713 /var/tmp/spdk2.sock 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4192713 ']' 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.313 12:21:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.313 [2024-11-20 12:21:02.898037] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:06:57.313 [2024-11-20 12:21:02.898086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4192713 ] 00:06:57.313 [2024-11-20 12:21:02.984968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.313 [2024-11-20 12:21:03.070705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.250 12:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.250 12:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.250 12:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4192713 00:06:58.250 12:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4192713 00:06:58.250 12:21:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.509 lslocks: write error 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4192554 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4192554 ']' 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4192554 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4192554 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4192554' 00:06:58.509 killing process with pid 4192554 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4192554 00:06:58.509 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4192554 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4192713 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4192713 ']' 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 4192713 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4192713 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4192713' 00:06:59.077 killing process with pid 4192713 00:06:59.077 12:21:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 4192713 00:06:59.077 12:21:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 4192713 00:06:59.646 00:06:59.646 real 0m2.651s 00:06:59.646 user 0m2.787s 00:06:59.646 sys 0m0.864s 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 END TEST locking_app_on_unlocked_coremask 00:06:59.646 ************************************ 00:06:59.646 12:21:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.646 12:21:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.646 12:21:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.646 12:21:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 START TEST locking_app_on_locked_coremask 00:06:59.646 ************************************ 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4193024 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4193024 /var/tmp/spdk.sock 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4193024 ']' 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.646 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 [2024-11-20 12:21:05.232681] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:59.646 [2024-11-20 12:21:05.232721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193024 ] 00:06:59.646 [2024-11-20 12:21:05.308175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.646 [2024-11-20 12:21:05.350573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.905 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4193212 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4193212 /var/tmp/spdk2.sock 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 4193212 /var/tmp/spdk2.sock 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4193212 /var/tmp/spdk2.sock 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 4193212 ']' 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.906 12:21:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.906 [2024-11-20 12:21:05.621879] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:06:59.906 [2024-11-20 12:21:05.621926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193212 ] 00:07:00.164 [2024-11-20 12:21:05.712707] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4193024 has claimed it. 00:07:00.165 [2024-11-20 12:21:05.712741] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4193212) - No such process 00:07:00.732 ERROR: process (pid: 4193212) is no longer running 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4193024 00:07:00.732 12:21:06 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4193024 00:07:00.732 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.991 lslocks: write error 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4193024 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 4193024 ']' 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 4193024 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4193024 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4193024' 00:07:00.991 killing process with pid 4193024 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 4193024 00:07:00.991 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 4193024 00:07:01.250 00:07:01.250 real 0m1.721s 00:07:01.250 user 0m1.845s 00:07:01.250 sys 0m0.564s 00:07:01.250 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.250 12:21:06 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.250 ************************************ 00:07:01.250 END TEST locking_app_on_locked_coremask 00:07:01.250 ************************************ 00:07:01.250 12:21:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.250 12:21:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.250 12:21:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.250 12:21:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.250 ************************************ 00:07:01.250 START TEST locking_overlapped_coremask 00:07:01.250 ************************************ 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4193790 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4193790 /var/tmp/spdk.sock 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4193790 ']' 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.250 12:21:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.509 [2024-11-20 12:21:07.022535] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:01.509 [2024-11-20 12:21:07.022577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193790 ] 00:07:01.509 [2024-11-20 12:21:07.098303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.509 [2024-11-20 12:21:07.142646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.509 [2024-11-20 12:21:07.142757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.509 [2024-11-20 12:21:07.142758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4193857 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4193857 /var/tmp/spdk2.sock 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 4193857 /var/tmp/spdk2.sock 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 4193857 /var/tmp/spdk2.sock 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 4193857 ']' 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.768 12:21:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.768 [2024-11-20 12:21:07.394392] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:01.768 [2024-11-20 12:21:07.394442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193857 ] 00:07:01.768 [2024-11-20 12:21:07.486787] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4193790 has claimed it. 00:07:01.768 [2024-11-20 12:21:07.486827] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (4193857) - No such process 00:07:02.335 ERROR: process (pid: 4193857) is no longer running 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4193790 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 4193790 ']' 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 4193790 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.335 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4193790 00:07:02.593 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.593 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.593 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4193790' 00:07:02.593 killing process with pid 4193790 00:07:02.593 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 4193790 00:07:02.593 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 4193790 00:07:02.851 00:07:02.851 real 0m1.431s 00:07:02.851 user 0m3.956s 00:07:02.851 sys 0m0.369s 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.851 
************************************ 00:07:02.851 END TEST locking_overlapped_coremask 00:07:02.851 ************************************ 00:07:02.851 12:21:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.851 12:21:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.851 12:21:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.851 12:21:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.851 ************************************ 00:07:02.851 START TEST locking_overlapped_coremask_via_rpc 00:07:02.851 ************************************ 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4194125 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4194125 /var/tmp/spdk.sock 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4194125 ']' 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:02.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.851 12:21:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.851 [2024-11-20 12:21:08.528246] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:02.851 [2024-11-20 12:21:08.528292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4194125 ] 00:07:02.851 [2024-11-20 12:21:08.603967] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.851 [2024-11-20 12:21:08.603992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.108 [2024-11-20 12:21:08.644749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.108 [2024-11-20 12:21:08.644854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.108 [2024-11-20 12:21:08.644855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4194188 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4194188 /var/tmp/spdk2.sock 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4194188 ']' 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.673 12:21:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.673 [2024-11-20 12:21:09.409681] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:03.673 [2024-11-20 12:21:09.409736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4194188 ] 00:07:03.931 [2024-11-20 12:21:09.504851] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.931 [2024-11-20 12:21:09.504882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.931 [2024-11-20 12:21:09.592301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.931 [2024-11-20 12:21:09.592414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.931 [2024-11-20 12:21:09.592416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.866 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.867 12:21:10 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.867 [2024-11-20 12:21:10.285271] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4194125 has claimed it. 00:07:04.867 request: 00:07:04.867 { 00:07:04.867 "method": "framework_enable_cpumask_locks", 00:07:04.867 "req_id": 1 00:07:04.867 } 00:07:04.867 Got JSON-RPC error response 00:07:04.867 response: 00:07:04.867 { 00:07:04.867 "code": -32603, 00:07:04.867 "message": "Failed to claim CPU core: 2" 00:07:04.867 } 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4194125 /var/tmp/spdk.sock 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 4194125 ']' 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4194188 /var/tmp/spdk2.sock 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 4194188 ']' 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.867 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.126 00:07:05.126 real 0m2.211s 00:07:05.126 user 0m0.973s 00:07:05.126 sys 0m0.163s 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.126 12:21:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.126 ************************************ 00:07:05.126 END TEST locking_overlapped_coremask_via_rpc 00:07:05.126 ************************************ 00:07:05.126 12:21:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.126 12:21:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4194125 ]] 00:07:05.126 12:21:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 4194125 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4194125 ']' 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4194125 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4194125 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4194125' 00:07:05.127 killing process with pid 4194125 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4194125 00:07:05.127 12:21:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4194125 00:07:05.386 12:21:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4194188 ]] 00:07:05.386 12:21:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4194188 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4194188 ']' 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4194188 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4194188 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
4194188' 00:07:05.386 killing process with pid 4194188 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 4194188 00:07:05.386 12:21:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 4194188 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4194125 ]] 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4194125 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4194125 ']' 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4194125 00:07:05.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4194125) - No such process 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4194125 is not found' 00:07:05.954 Process with pid 4194125 is not found 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4194188 ]] 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4194188 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 4194188 ']' 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 4194188 00:07:05.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4194188) - No such process 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 4194188 is not found' 00:07:05.954 Process with pid 4194188 is not found 00:07:05.954 12:21:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:05.954 00:07:05.954 real 0m14.300s 00:07:05.954 user 0m25.791s 00:07:05.954 sys 0m4.794s 00:07:05.954 12:21:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.954 
12:21:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.954 ************************************ 00:07:05.954 END TEST cpu_locks 00:07:05.954 ************************************ 00:07:05.954 00:07:05.954 real 0m39.928s 00:07:05.954 user 1m18.951s 00:07:05.954 sys 0m8.333s 00:07:05.954 12:21:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.954 12:21:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.954 ************************************ 00:07:05.954 END TEST event 00:07:05.954 ************************************ 00:07:05.954 12:21:11 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:05.954 12:21:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.954 12:21:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.954 12:21:11 -- common/autotest_common.sh@10 -- # set +x 00:07:05.954 ************************************ 00:07:05.954 START TEST thread 00:07:05.954 ************************************ 00:07:05.954 12:21:11 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:05.954 * Looking for test storage... 
00:07:05.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:05.954 12:21:11 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.954 12:21:11 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.954 12:21:11 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.955 12:21:11 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.955 12:21:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.955 12:21:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.955 12:21:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.955 12:21:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.955 12:21:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.955 12:21:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.955 12:21:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.955 12:21:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.955 12:21:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.955 12:21:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.214 12:21:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.214 12:21:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:06.214 12:21:11 thread -- scripts/common.sh@345 -- # : 1 00:07:06.214 12:21:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.214 12:21:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.214 12:21:11 thread -- scripts/common.sh@365 -- # decimal 1 00:07:06.214 12:21:11 thread -- scripts/common.sh@353 -- # local d=1 00:07:06.214 12:21:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.214 12:21:11 thread -- scripts/common.sh@355 -- # echo 1 00:07:06.214 12:21:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.214 12:21:11 thread -- scripts/common.sh@366 -- # decimal 2 00:07:06.214 12:21:11 thread -- scripts/common.sh@353 -- # local d=2 00:07:06.214 12:21:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.214 12:21:11 thread -- scripts/common.sh@355 -- # echo 2 00:07:06.214 12:21:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.214 12:21:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.214 12:21:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.214 12:21:11 thread -- scripts/common.sh@368 -- # return 0 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.214 --rc genhtml_branch_coverage=1 00:07:06.214 --rc genhtml_function_coverage=1 00:07:06.214 --rc genhtml_legend=1 00:07:06.214 --rc geninfo_all_blocks=1 00:07:06.214 --rc geninfo_unexecuted_blocks=1 00:07:06.214 00:07:06.214 ' 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.214 --rc genhtml_branch_coverage=1 00:07:06.214 --rc genhtml_function_coverage=1 00:07:06.214 --rc genhtml_legend=1 00:07:06.214 --rc geninfo_all_blocks=1 00:07:06.214 --rc geninfo_unexecuted_blocks=1 00:07:06.214 00:07:06.214 ' 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.214 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.214 --rc genhtml_branch_coverage=1 00:07:06.214 --rc genhtml_function_coverage=1 00:07:06.214 --rc genhtml_legend=1 00:07:06.214 --rc geninfo_all_blocks=1 00:07:06.214 --rc geninfo_unexecuted_blocks=1 00:07:06.214 00:07:06.214 ' 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.214 --rc genhtml_branch_coverage=1 00:07:06.214 --rc genhtml_function_coverage=1 00:07:06.214 --rc genhtml_legend=1 00:07:06.214 --rc geninfo_all_blocks=1 00:07:06.214 --rc geninfo_unexecuted_blocks=1 00:07:06.214 00:07:06.214 ' 00:07:06.214 12:21:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.214 12:21:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.214 ************************************ 00:07:06.214 START TEST thread_poller_perf 00:07:06.214 ************************************ 00:07:06.214 12:21:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.214 [2024-11-20 12:21:11.788105] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:06.214 [2024-11-20 12:21:11.788171] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984 ] 00:07:06.214 [2024-11-20 12:21:11.864093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.214 [2024-11-20 12:21:11.903718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.214 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:07.588 [2024-11-20T11:21:13.354Z] ====================================== 00:07:07.588 [2024-11-20T11:21:13.354Z] busy:2107740512 (cyc) 00:07:07.588 [2024-11-20T11:21:13.354Z] total_run_count: 418000 00:07:07.588 [2024-11-20T11:21:13.354Z] tsc_hz: 2100000000 (cyc) 00:07:07.588 [2024-11-20T11:21:13.354Z] ====================================== 00:07:07.588 [2024-11-20T11:21:13.354Z] poller_cost: 5042 (cyc), 2400 (nsec) 00:07:07.588 00:07:07.588 real 0m1.183s 00:07:07.588 user 0m1.106s 00:07:07.588 sys 0m0.073s 00:07:07.588 12:21:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.588 12:21:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 ************************************ 00:07:07.588 END TEST thread_poller_perf 00:07:07.588 ************************************ 00:07:07.588 12:21:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.588 12:21:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:07.588 12:21:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.588 12:21:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 ************************************ 00:07:07.588 START TEST thread_poller_perf 00:07:07.588 
************************************ 00:07:07.588 12:21:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:07.588 [2024-11-20 12:21:13.045052] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:07.588 [2024-11-20 12:21:13.045109] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258 ] 00:07:07.588 [2024-11-20 12:21:13.125236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.588 [2024-11-20 12:21:13.165642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.588 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:08.525 [2024-11-20T11:21:14.292Z] ====================================== 00:07:08.526 [2024-11-20T11:21:14.292Z] busy:2101449820 (cyc) 00:07:08.526 [2024-11-20T11:21:14.292Z] total_run_count: 5614000 00:07:08.526 [2024-11-20T11:21:14.292Z] tsc_hz: 2100000000 (cyc) 00:07:08.526 [2024-11-20T11:21:14.292Z] ====================================== 00:07:08.526 [2024-11-20T11:21:14.292Z] poller_cost: 374 (cyc), 178 (nsec) 00:07:08.526 00:07:08.526 real 0m1.180s 00:07:08.526 user 0m1.091s 00:07:08.526 sys 0m0.085s 00:07:08.526 12:21:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.526 12:21:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.526 ************************************ 00:07:08.526 END TEST thread_poller_perf 00:07:08.526 ************************************ 00:07:08.526 12:21:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:08.526 00:07:08.526 real 0m2.686s 00:07:08.526 user 0m2.361s 00:07:08.526 sys 0m0.339s 00:07:08.526 12:21:14 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.526 12:21:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.526 ************************************ 00:07:08.526 END TEST thread 00:07:08.526 ************************************ 00:07:08.526 12:21:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:08.526 12:21:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.526 12:21:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.526 12:21:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.526 12:21:14 -- common/autotest_common.sh@10 -- # set +x 00:07:08.817 ************************************ 00:07:08.817 START TEST app_cmdline 00:07:08.817 ************************************ 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.817 * Looking for test storage... 00:07:08.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.817 12:21:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.817 --rc genhtml_branch_coverage=1 
00:07:08.817 --rc genhtml_function_coverage=1 00:07:08.817 --rc genhtml_legend=1 00:07:08.817 --rc geninfo_all_blocks=1 00:07:08.817 --rc geninfo_unexecuted_blocks=1 00:07:08.817 00:07:08.817 ' 00:07:08.817 12:21:14 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.817 --rc genhtml_branch_coverage=1 00:07:08.818 --rc genhtml_function_coverage=1 00:07:08.818 --rc genhtml_legend=1 00:07:08.818 --rc geninfo_all_blocks=1 00:07:08.818 --rc geninfo_unexecuted_blocks=1 00:07:08.818 00:07:08.818 ' 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.818 --rc genhtml_branch_coverage=1 00:07:08.818 --rc genhtml_function_coverage=1 00:07:08.818 --rc genhtml_legend=1 00:07:08.818 --rc geninfo_all_blocks=1 00:07:08.818 --rc geninfo_unexecuted_blocks=1 00:07:08.818 00:07:08.818 ' 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.818 --rc genhtml_branch_coverage=1 00:07:08.818 --rc genhtml_function_coverage=1 00:07:08.818 --rc genhtml_legend=1 00:07:08.818 --rc geninfo_all_blocks=1 00:07:08.818 --rc geninfo_unexecuted_blocks=1 00:07:08.818 00:07:08.818 ' 00:07:08.818 12:21:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.818 12:21:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1609 00:07:08.818 12:21:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.818 12:21:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1609 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1609 ']' 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.818 12:21:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.818 [2024-11-20 12:21:14.535511] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:08.818 [2024-11-20 12:21:14.535558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609 ] 00:07:09.096 [2024-11-20 12:21:14.611239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.096 [2024-11-20 12:21:14.652788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.360 12:21:14 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.360 12:21:14 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:09.360 12:21:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.360 { 00:07:09.360 "version": "SPDK v25.01-pre git sha1 92fb22519", 00:07:09.360 "fields": { 00:07:09.360 "major": 25, 00:07:09.360 "minor": 1, 00:07:09.360 "patch": 0, 00:07:09.360 "suffix": "-pre", 00:07:09.360 "commit": "92fb22519" 00:07:09.360 } 00:07:09.360 } 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.360 12:21:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.360 12:21:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.361 12:21:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.361 12:21:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.361 12:21:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.361 12:21:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:09.361 12:21:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.619 12:21:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.620 request: 00:07:09.620 { 00:07:09.620 "method": "env_dpdk_get_mem_stats", 00:07:09.620 "req_id": 1 00:07:09.620 } 00:07:09.620 Got JSON-RPC error response 00:07:09.620 response: 00:07:09.620 { 00:07:09.620 "code": -32601, 00:07:09.620 "message": "Method not found" 00:07:09.620 } 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.620 12:21:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1609 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1609 ']' 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1609 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609' 00:07:09.620 killing process with pid 1609 00:07:09.620 12:21:15 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 1609 00:07:09.620 12:21:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 1609 00:07:10.188 00:07:10.188 real 0m1.337s 00:07:10.188 user 0m1.577s 00:07:10.188 sys 0m0.440s 00:07:10.188 12:21:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.188 12:21:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.188 ************************************ 00:07:10.188 END TEST app_cmdline 00:07:10.188 ************************************ 00:07:10.188 12:21:15 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.188 12:21:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.188 12:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.188 12:21:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.188 ************************************ 00:07:10.188 START TEST version 00:07:10.188 ************************************ 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:10.188 * Looking for test storage... 
00:07:10.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.188 12:21:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.188 12:21:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.188 12:21:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.188 12:21:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.188 12:21:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.188 12:21:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.188 12:21:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.188 12:21:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.188 12:21:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.188 12:21:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.188 12:21:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.188 12:21:15 version -- scripts/common.sh@344 -- # case "$op" in 00:07:10.188 12:21:15 version -- scripts/common.sh@345 -- # : 1 00:07:10.188 12:21:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.188 12:21:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.188 12:21:15 version -- scripts/common.sh@365 -- # decimal 1 00:07:10.188 12:21:15 version -- scripts/common.sh@353 -- # local d=1 00:07:10.188 12:21:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.188 12:21:15 version -- scripts/common.sh@355 -- # echo 1 00:07:10.188 12:21:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.188 12:21:15 version -- scripts/common.sh@366 -- # decimal 2 00:07:10.188 12:21:15 version -- scripts/common.sh@353 -- # local d=2 00:07:10.188 12:21:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.188 12:21:15 version -- scripts/common.sh@355 -- # echo 2 00:07:10.188 12:21:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.188 12:21:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.188 12:21:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.188 12:21:15 version -- scripts/common.sh@368 -- # return 0 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.188 --rc genhtml_branch_coverage=1 00:07:10.188 --rc genhtml_function_coverage=1 00:07:10.188 --rc genhtml_legend=1 00:07:10.188 --rc geninfo_all_blocks=1 00:07:10.188 --rc geninfo_unexecuted_blocks=1 00:07:10.188 00:07:10.188 ' 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.188 --rc genhtml_branch_coverage=1 00:07:10.188 --rc genhtml_function_coverage=1 00:07:10.188 --rc genhtml_legend=1 00:07:10.188 --rc geninfo_all_blocks=1 00:07:10.188 --rc geninfo_unexecuted_blocks=1 00:07:10.188 00:07:10.188 ' 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.188 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.188 --rc genhtml_branch_coverage=1 00:07:10.188 --rc genhtml_function_coverage=1 00:07:10.188 --rc genhtml_legend=1 00:07:10.188 --rc geninfo_all_blocks=1 00:07:10.188 --rc geninfo_unexecuted_blocks=1 00:07:10.188 00:07:10.188 ' 00:07:10.188 12:21:15 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.188 --rc genhtml_branch_coverage=1 00:07:10.188 --rc genhtml_function_coverage=1 00:07:10.188 --rc genhtml_legend=1 00:07:10.188 --rc geninfo_all_blocks=1 00:07:10.188 --rc geninfo_unexecuted_blocks=1 00:07:10.188 00:07:10.188 ' 00:07:10.188 12:21:15 version -- app/version.sh@17 -- # get_header_version major 00:07:10.189 12:21:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # cut -f2 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.189 12:21:15 version -- app/version.sh@17 -- # major=25 00:07:10.189 12:21:15 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.189 12:21:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # cut -f2 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.189 12:21:15 version -- app/version.sh@18 -- # minor=1 00:07:10.189 12:21:15 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.189 12:21:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # cut -f2 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.189 
12:21:15 version -- app/version.sh@19 -- # patch=0 00:07:10.189 12:21:15 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.189 12:21:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # cut -f2 00:07:10.189 12:21:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.189 12:21:15 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.189 12:21:15 version -- app/version.sh@22 -- # version=25.1 00:07:10.189 12:21:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.189 12:21:15 version -- app/version.sh@28 -- # version=25.1rc0 00:07:10.189 12:21:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.189 12:21:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.448 12:21:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:10.448 12:21:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:10.448 00:07:10.448 real 0m0.238s 00:07:10.448 user 0m0.158s 00:07:10.448 sys 0m0.123s 00:07:10.448 12:21:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.448 12:21:15 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.448 ************************************ 00:07:10.448 END TEST version 00:07:10.448 ************************************ 00:07:10.448 12:21:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:10.448 12:21:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:10.448 12:21:15 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.448 12:21:15 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:10.448 12:21:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.448 12:21:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.448 12:21:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:10.448 12:21:15 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.448 12:21:15 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.448 12:21:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.448 12:21:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.448 12:21:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.448 12:21:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:10.448 12:21:16 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:10.448 12:21:16 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:10.448 12:21:16 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:10.448 12:21:16 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:10.448 12:21:16 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.448 12:21:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.448 12:21:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.448 12:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:10.448 ************************************ 00:07:10.448 START TEST nvmf_tcp 00:07:10.448 ************************************ 00:07:10.448 12:21:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.448 * Looking for test storage... 
00:07:10.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.448 12:21:16 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.448 12:21:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.448 12:21:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.708 12:21:16 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.708 --rc genhtml_branch_coverage=1 00:07:10.708 --rc genhtml_function_coverage=1 00:07:10.708 --rc genhtml_legend=1 00:07:10.708 --rc geninfo_all_blocks=1 00:07:10.708 --rc geninfo_unexecuted_blocks=1 00:07:10.708 00:07:10.708 ' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.708 --rc genhtml_branch_coverage=1 00:07:10.708 --rc genhtml_function_coverage=1 00:07:10.708 --rc genhtml_legend=1 00:07:10.708 --rc geninfo_all_blocks=1 00:07:10.708 --rc geninfo_unexecuted_blocks=1 00:07:10.708 00:07:10.708 ' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.708 --rc genhtml_branch_coverage=1 00:07:10.708 --rc genhtml_function_coverage=1 00:07:10.708 --rc genhtml_legend=1 00:07:10.708 --rc geninfo_all_blocks=1 00:07:10.708 --rc geninfo_unexecuted_blocks=1 00:07:10.708 00:07:10.708 ' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.708 --rc genhtml_branch_coverage=1 00:07:10.708 --rc genhtml_function_coverage=1 00:07:10.708 --rc genhtml_legend=1 00:07:10.708 --rc geninfo_all_blocks=1 00:07:10.708 --rc geninfo_unexecuted_blocks=1 00:07:10.708 00:07:10.708 ' 00:07:10.708 12:21:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.708 12:21:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:10.708 12:21:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.708 12:21:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.708 ************************************ 00:07:10.708 START TEST nvmf_target_core 00:07:10.708 ************************************ 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:10.708 * Looking for test storage... 
00:07:10.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.708 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.709 --rc genhtml_branch_coverage=1 00:07:10.709 --rc genhtml_function_coverage=1 00:07:10.709 --rc genhtml_legend=1 00:07:10.709 --rc geninfo_all_blocks=1 00:07:10.709 --rc geninfo_unexecuted_blocks=1 00:07:10.709 00:07:10.709 ' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.709 --rc genhtml_branch_coverage=1 
00:07:10.709 --rc genhtml_function_coverage=1 00:07:10.709 --rc genhtml_legend=1 00:07:10.709 --rc geninfo_all_blocks=1 00:07:10.709 --rc geninfo_unexecuted_blocks=1 00:07:10.709 00:07:10.709 ' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.709 --rc genhtml_branch_coverage=1 00:07:10.709 --rc genhtml_function_coverage=1 00:07:10.709 --rc genhtml_legend=1 00:07:10.709 --rc geninfo_all_blocks=1 00:07:10.709 --rc geninfo_unexecuted_blocks=1 00:07:10.709 00:07:10.709 ' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.709 --rc genhtml_branch_coverage=1 00:07:10.709 --rc genhtml_function_coverage=1 00:07:10.709 --rc genhtml_legend=1 00:07:10.709 --rc geninfo_all_blocks=1 00:07:10.709 --rc geninfo_unexecuted_blocks=1 00:07:10.709 00:07:10.709 ' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.709 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.969 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.970 ************************************ 00:07:10.970 START TEST nvmf_abort 00:07:10.970 ************************************ 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:10.970 * Looking for test storage... 
00:07:10.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.970 
12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc 
geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.970 --rc genhtml_branch_coverage=1 00:07:10.970 --rc genhtml_function_coverage=1 00:07:10.970 --rc genhtml_legend=1 00:07:10.970 --rc geninfo_all_blocks=1 00:07:10.970 --rc geninfo_unexecuted_blocks=1 00:07:10.970 00:07:10.970 ' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
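The `lt 1.15 2` trace above (scripts/common.sh@333-368) splits each version string on `.`, `-`, and `:` into an array and compares it component-wise, padding missing components with 0. A minimal standalone sketch of that technique, assuming nothing beyond bash; the function name `version_lt` is ours, not SPDK's (the real helpers are `lt`/`cmp_versions`):

```shell
# Component-wise "less than" for dotted version strings, mirroring the
# IFS=.-: / read -ra / (( ver1[v] < ver2[v] )) pattern seen in the trace.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0, so "2" behaves like "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal is not less-than
}
```

This is why `lt 1.15 2` returns 0 here: lcov 1.15 predates 2.x, so the run falls back to the pre-2.0 `--rc lcov_branch_coverage=1` option spelling.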
00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.970 12:21:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.970 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.971 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.230 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.230 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:11.230 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.230 12:21:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.802 12:21:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:17.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:17.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.802 12:21:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.802 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:17.802 Found net devices under 0000:86:00.0: cvl_0_0 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:07:17.803 Found net devices under 0000:86:00.1: cvl_0_1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:07:17.803 00:07:17.803 --- 10.0.0.2 ping statistics --- 00:07:17.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.803 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:07:17.803 00:07:17.803 --- 10.0.0.1 ping statistics --- 00:07:17.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.803 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=5492 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 5492 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 5492 ']' 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.803 12:21:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.803 [2024-11-20 12:21:22.783850] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:07:17.803 [2024-11-20 12:21:22.783899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.803 [2024-11-20 12:21:22.864148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.803 [2024-11-20 12:21:22.907513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.803 [2024-11-20 12:21:22.907551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.803 [2024-11-20 12:21:22.907558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.803 [2024-11-20 12:21:22.907564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.803 [2024-11-20 12:21:22.907569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
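The namespace plumbing traced above (nvmf/common.sh@271-291) moves the target-side NIC into a fresh netns so target and initiator can talk over real hardware on one host. A dry-run sketch of that sequence, assuming the same interface names as this run (`cvl_0_0_ns_spdk`, `cvl_0_0`, `cvl_0_1`); `run` only echoes, since the real commands need root and the physical NICs:

```shell
# Echo each step instead of executing it; replace the printf with "$@"
# to run for real (requires root and the actual cvl_0_* interfaces).
run() { printf '+ %s\n' "$*"; }

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2    # initiator -> target sanity check, as logged above
```

The bidirectional pings in the log (0.493 ms and 0.200 ms, zero loss) confirm this plumbing before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.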
00:07:17.803 [2024-11-20 12:21:22.908994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.803 [2024-11-20 12:21:22.909103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.803 [2024-11-20 12:21:22.909104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 [2024-11-20 12:21:23.674007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 Malloc0 00:07:18.062 12:21:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 Delay0 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 [2024-11-20 12:21:23.753054] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.062 12:21:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:18.320 [2024-11-20 12:21:23.890901] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:20.850 [2024-11-20 12:21:26.037379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49d90 is same with the state(6) to be set 00:07:20.850 Initializing NVMe Controllers 00:07:20.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:20.850 controller IO queue size 128 less than required 00:07:20.850 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:20.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:20.850 Initialization complete. Launching workers. 
00:07:20.850 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37139 00:07:20.850 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37200, failed to submit 62 00:07:20.850 success 37143, unsuccessful 57, failed 0 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.850 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.851 rmmod nvme_tcp 00:07:20.851 rmmod nvme_fabrics 00:07:20.851 rmmod nvme_keyring 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:20.851 12:21:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 5492 ']' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 5492 ']' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 5492' 00:07:20.851 killing process with pid 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 5492 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.851 12:21:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.758 00:07:22.758 real 0m11.903s 00:07:22.758 user 0m13.925s 00:07:22.758 sys 0m5.519s 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.758 ************************************ 00:07:22.758 END TEST nvmf_abort 00:07:22.758 ************************************ 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.758 12:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.758 ************************************ 00:07:22.758 START TEST nvmf_ns_hotplug_stress 00:07:22.758 ************************************ 00:07:22.758 12:21:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:23.018 * Looking for test storage... 00:07:23.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.018 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.018 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.018 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.018 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.018 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.019 
12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.019 12:21:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.019 --rc genhtml_branch_coverage=1 00:07:23.019 --rc genhtml_function_coverage=1 00:07:23.019 --rc genhtml_legend=1 00:07:23.019 --rc geninfo_all_blocks=1 00:07:23.019 --rc geninfo_unexecuted_blocks=1 00:07:23.019 00:07:23.019 ' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.019 --rc genhtml_branch_coverage=1 00:07:23.019 --rc genhtml_function_coverage=1 00:07:23.019 --rc genhtml_legend=1 00:07:23.019 --rc geninfo_all_blocks=1 00:07:23.019 --rc geninfo_unexecuted_blocks=1 00:07:23.019 00:07:23.019 ' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.019 --rc genhtml_branch_coverage=1 00:07:23.019 --rc genhtml_function_coverage=1 00:07:23.019 --rc genhtml_legend=1 00:07:23.019 --rc geninfo_all_blocks=1 00:07:23.019 --rc geninfo_unexecuted_blocks=1 00:07:23.019 00:07:23.019 ' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.019 --rc genhtml_branch_coverage=1 00:07:23.019 --rc genhtml_function_coverage=1 00:07:23.019 --rc genhtml_legend=1 00:07:23.019 --rc geninfo_all_blocks=1 00:07:23.019 --rc geninfo_unexecuted_blocks=1 00:07:23.019 
00:07:23.019 ' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:23.019 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.020 12:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.587 12:21:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:29.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:29.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.587 12:21:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.587 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:29.588 Found net devices under 0000:86:00.0: cvl_0_0 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.588 12:21:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:29.588 Found net devices under 0000:86:00.1: cvl_0_1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.588 12:21:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:07:29.588 00:07:29.588 --- 10.0.0.2 ping statistics --- 00:07:29.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.588 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:29.588 00:07:29.588 --- 10.0.0.1 ping statistics --- 00:07:29.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.588 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=9732 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 9732 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 9732 ']' 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.588 12:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.588 [2024-11-20 12:21:34.804545] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:07:29.588 [2024-11-20 12:21:34.804588] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.588 [2024-11-20 12:21:34.882991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.588 [2024-11-20 12:21:34.923971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.588 [2024-11-20 12:21:34.924007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.588 [2024-11-20 12:21:34.924013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.588 [2024-11-20 12:21:34.924020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.588 [2024-11-20 12:21:34.924024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
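The nvmf_tcp_init records above (common.sh@250–291) bring up the test topology: the target-side interface is moved into a private network namespace, both ends get addresses in 10.0.0.0/24, an iptables rule opens TCP port 4420, and connectivity is verified with ping in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of that sequence follows — interface names and addresses are the ones from this run, while the `run` wrapper (which prints instead of executing, so the sketch needs no root or hardware) is added here purely for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init network bring-up seen in the log above.
# Interface/address values match this run; `run` only echoes each
# command, so the sketch is safe to execute on any machine.
run() { echo "+ $*"; }

TGT_IF=cvl_0_0        # target-side interface (moved into the netns)
INI_IF=cvl_0_1        # initiator-side interface (stays in the root ns)
NS=cvl_0_0_ns_spdk    # namespace name used by this test run

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic arriving on the initiator-side interface.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Bidirectional reachability check, as in the ping output above.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Once this succeeds, the target application is started with `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`, which is why the listener at 10.0.0.2:4420 later in the log is only reachable through the namespace.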
00:07:29.588 [2024-11-20 12:21:34.925494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.588 [2024-11-20 12:21:34.925600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.588 [2024-11-20 12:21:34.925601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.588 [2024-11-20 12:21:35.218098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.588 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:29.847 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.847 [2024-11-20 12:21:35.599439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.105 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.105 12:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:30.363 Malloc0 00:07:30.363 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.623 Delay0 00:07:30.624 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.882 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:30.882 NULL1 00:07:30.882 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:31.140 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=10013 00:07:31.140 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:31.140 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:31.140 12:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.398 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.656 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:31.656 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:31.913 true 00:07:31.913 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:31.913 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.913 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.170 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:32.170 12:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:32.428 true 00:07:32.428 12:21:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:32.428 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.686 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.944 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:32.944 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:32.944 true 00:07:32.944 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:32.944 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.201 12:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.460 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:33.460 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:33.717 true 00:07:33.717 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:33.717 12:21:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.975 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.975 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:33.975 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:34.234 true 00:07:34.234 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:34.234 12:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.492 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.750 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:34.750 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:35.007 true 00:07:35.007 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:35.007 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.266 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.266 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:35.266 12:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:35.523 true 00:07:35.523 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:35.523 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.781 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.040 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:36.040 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:36.040 true 00:07:36.298 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:36.299 12:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.299 
12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.557 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:36.557 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:36.815 true 00:07:36.815 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:36.815 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.073 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.331 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:37.331 12:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:37.331 true 00:07:37.331 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:37.331 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.589 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.848 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:37.848 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:38.105 true 00:07:38.105 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:38.106 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.364 12:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.364 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:38.364 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:38.623 true 00:07:38.623 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:38.623 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.881 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.139 
12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:39.139 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:39.396 true 00:07:39.396 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:39.396 12:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.655 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.655 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:39.655 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:39.912 true 00:07:39.912 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:39.912 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.170 12:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.428 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:40.428 12:21:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:40.686 true 00:07:40.686 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:40.686 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.686 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.943 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:40.943 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:41.201 true 00:07:41.201 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:41.201 12:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.458 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.717 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:41.717 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:41.717 true 00:07:41.717 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:41.717 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.975 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.233 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:42.233 12:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:42.490 true 00:07:42.490 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:42.490 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.748 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.748 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:42.748 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:43.005 true 00:07:43.005 12:21:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:43.005 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.263 12:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.524 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:43.524 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:43.786 true 00:07:43.786 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:43.786 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.786 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.044 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:44.044 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:44.302 true 00:07:44.302 12:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:44.302 12:21:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.561 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.820 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:44.820 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:44.820 true 00:07:45.079 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:45.079 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.079 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.338 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:45.338 12:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:45.596 true 00:07:45.596 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:45.596 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.855 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.114 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:46.114 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:46.114 true 00:07:46.114 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:46.114 12:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.372 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.631 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:46.631 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:46.890 true 00:07:46.890 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:46.890 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.149 
12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.149 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:47.149 12:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.408 true 00:07:47.408 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:47.408 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.666 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.925 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:47.925 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.184 true 00:07:48.184 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:48.184 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.443 12:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.443 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.443 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.701 true 00:07:48.701 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:48.701 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.960 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.219 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:49.219 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:49.219 true 00:07:49.219 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:49.219 12:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.479 12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.738 
12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:49.738 12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:49.997 true 00:07:49.997 12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:49.997 12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.255 12:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.514 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:50.514 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:50.514 true 00:07:50.514 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:50.514 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.772 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.030 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:51.030 12:21:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:51.288 true 00:07:51.288 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:51.288 12:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.546 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.805 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:51.805 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:51.805 true 00:07:51.805 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:51.805 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.064 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.323 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:52.323 12:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:52.582 true 00:07:52.582 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:52.582 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.841 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.841 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:52.841 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:53.099 true 00:07:53.099 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:53.099 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.358 12:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.615 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:53.615 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:53.874 true 00:07:53.874 12:21:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:53.874 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.874 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.132 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:54.132 12:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:54.391 true 00:07:54.391 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:54.391 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.649 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.908 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:54.908 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:54.908 true 00:07:55.167 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:55.167 12:22:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.167 12:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.425 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:55.425 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:55.684 true 00:07:55.684 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:55.684 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.943 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.201 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:56.201 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:56.201 true 00:07:56.201 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:56.201 12:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.460 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.719 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:56.719 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:56.978 true 00:07:56.978 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:56.978 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.239 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.239 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:57.239 12:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:57.522 true 00:07:57.522 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:57.522 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.831 
12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.140 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:58.140 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:58.141 true 00:07:58.141 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:58.141 12:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.399 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.658 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:58.659 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:58.917 true 00:07:58.917 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:58.917 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.917 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.175 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:59.175 12:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:59.433 true 00:07:59.433 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:07:59.433 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.692 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.949 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:59.949 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:59.949 true 00:08:00.220 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:08:00.220 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.220 12:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.478 
12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:00.478 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:00.736 true 00:08:00.736 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:08:00.736 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.994 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.253 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:01.253 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:01.253 true 00:08:01.253 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:08:01.253 12:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.511 Initializing NVMe Controllers 00:08:01.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:01.511 Controller IO queue size 128, less than required. 00:08:01.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:01.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:01.511 Initialization complete. Launching workers. 00:08:01.511 ======================================================== 00:08:01.511 Latency(us) 00:08:01.511 Device Information : IOPS MiB/s Average min max 00:08:01.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27540.24 13.45 4647.58 2105.78 8716.81 00:08:01.511 ======================================================== 00:08:01.511 Total : 27540.24 13.45 4647.58 2105.78 8716.81 00:08:01.511 00:08:01.511 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.769 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:01.769 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:02.028 true 00:08:02.028 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 10013 00:08:02.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (10013) - No such process 00:08:02.028 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 10013 00:08:02.028 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.287 12:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
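The iterations above (null_size 1022 through 1049) all come from the same loop in `target/ns_hotplug_stress.sh`, whose lines 44–50 are echoed in the log: while the target process (PID 10013 here) is alive, remove namespace 1, re-add the `Delay0` bdev, bump `null_size`, and resize `NULL1`. A minimal sketch of that loop follows; the RPC client is stubbed with `echo` and the loop is bounded to 3 passes so it can run without an SPDK target — the real script loops on `kill -0 $tgt_pid` and uses the full path to `scripts/rpc.py`.

```shell
#!/bin/sh
# Sketch of the resize loop seen in the log (ns_hotplug_stress.sh@44-50).
# Assumption: rpc_py is stubbed with "echo"; replace with scripts/rpc.py
# against a live target. The iteration bound replaces "kill -0 $tgt_pid".
rpc_py="echo rpc.py"
nqn="nqn.2016-06.io.spdk:cnode1"
null_size=1021
iters=0
while [ "$iters" -lt 3 ]; do
    # Detach namespace 1, then re-attach the Delay0 bdev to the subsystem.
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0
    # Grow the null bdev by one block each pass, as the log shows
    # (1022, 1023, ... 1049).
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 "$null_size"
    iters=$((iters + 1))
done
echo "final null_size=$null_size"
```

The loop ends in the log when `kill -0 10013` reports "No such process", i.e. the target exited and the script falls through to `wait`.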
nqn.2016-06.io.spdk:cnode1 2 00:08:02.287 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:02.287 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:02.287 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:02.287 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.287 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:02.546 null0 00:08:02.546 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.546 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.546 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:02.805 null1 00:08:02.805 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.805 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.805 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:02.805 null2 00:08:03.065 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.065 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.065 12:22:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:03.065 null3 00:08:03.065 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.065 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.065 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:03.325 null4 00:08:03.325 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.325 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.325 12:22:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:03.583 null5 00:08:03.583 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.583 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.583 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:03.842 null6 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:03.842 null7 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 15683 15684 15686 15688 15690 15693 15694 15696
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:03.842 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.101 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.101 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.102 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.361 12:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.361 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.362 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:04.620 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:04.880 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.140 12:22:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:05.399 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.658 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:05.918 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:06.177 12:22:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:06.436 12:22:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.436 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.695 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.696 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.696 
12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.955 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.215 12:22:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.474 12:22:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.474 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.734 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.993 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.994 rmmod nvme_tcp 00:08:07.994 rmmod nvme_fabrics 00:08:07.994 rmmod nvme_keyring 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 9732 ']' 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 9732 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 
9732 ']' 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 9732 00:08:07.994 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 9732 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 9732' 00:08:08.253 killing process with pid 9732 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 9732 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 9732 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.253 12:22:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.792 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.792 00:08:10.792 real 0m47.520s 00:08:10.793 user 3m21.713s 00:08:10.793 sys 0m17.126s 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.793 ************************************ 00:08:10.793 END TEST nvmf_ns_hotplug_stress 00:08:10.793 ************************************ 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.793 ************************************ 00:08:10.793 START TEST nvmf_delete_subsystem 00:08:10.793 ************************************ 00:08:10.793 12:22:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:10.793 * Looking for test storage... 00:08:10.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.793 12:22:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.793 12:22:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.793 --rc genhtml_branch_coverage=1 00:08:10.793 --rc genhtml_function_coverage=1 00:08:10.793 --rc genhtml_legend=1 00:08:10.793 --rc geninfo_all_blocks=1 00:08:10.793 --rc geninfo_unexecuted_blocks=1 00:08:10.793 00:08:10.793 ' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.793 --rc genhtml_branch_coverage=1 00:08:10.793 --rc genhtml_function_coverage=1 00:08:10.793 --rc genhtml_legend=1 00:08:10.793 --rc geninfo_all_blocks=1 00:08:10.793 --rc geninfo_unexecuted_blocks=1 00:08:10.793 00:08:10.793 ' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.793 --rc genhtml_branch_coverage=1 00:08:10.793 --rc genhtml_function_coverage=1 00:08:10.793 --rc genhtml_legend=1 00:08:10.793 --rc geninfo_all_blocks=1 00:08:10.793 --rc geninfo_unexecuted_blocks=1 00:08:10.793 00:08:10.793 ' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.793 --rc genhtml_branch_coverage=1 00:08:10.793 --rc genhtml_function_coverage=1 00:08:10.793 --rc genhtml_legend=1 00:08:10.793 --rc geninfo_all_blocks=1 00:08:10.793 --rc geninfo_unexecuted_blocks=1 00:08:10.793 00:08:10.793 ' 
00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.793 12:22:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.793 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.794 12:22:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.368 12:22:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:17.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:17.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.368 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:17.369 Found net devices under 0000:86:00.0: cvl_0_0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:08:17.369 Found net devices under 0000:86:00.1: cvl_0_1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:17.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:08:17.369 00:08:17.369 --- 10.0.0.2 ping statistics --- 00:08:17.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.369 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:08:17.369 00:08:17.369 --- 10.0.0.1 ping statistics --- 00:08:17.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.369 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:17.369 12:22:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=20090 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 20090 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 20090 ']' 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.369 [2024-11-20 12:22:22.396017] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:17.369 [2024-11-20 12:22:22.396061] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.369 [2024-11-20 12:22:22.460030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.369 [2024-11-20 12:22:22.498494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.369 [2024-11-20 12:22:22.498548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.369 [2024-11-20 12:22:22.498556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.369 [2024-11-20 12:22:22.498562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.369 [2024-11-20 12:22:22.498567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:17.369 [2024-11-20 12:22:22.499791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.369 [2024-11-20 12:22:22.499792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.369 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 [2024-11-20 12:22:22.646460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 [2024-11-20 12:22:22.666672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 NULL1 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 Delay0 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=20111 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:17.370 12:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:17.370 [2024-11-20 12:22:22.777546] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:19.278 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.278 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.278 12:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error 
(sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 [2024-11-20 12:22:24.984116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfe2c0 is same with the state(6) to be set 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 
Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error 
(sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 
00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 [2024-11-20 12:22:24.984686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfe860 is same with the state(6) to be set 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read 
completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.278 starting I/O failed: -6 00:08:19.278 Write completed with error (sct=0, sc=8) 00:08:19.278 Read completed with error (sct=0, sc=8) 00:08:19.279 Read completed with error (sct=0, sc=8) 00:08:19.279 Read completed with error (sct=0, sc=8) 00:08:19.279 starting I/O failed: -6 00:08:19.279 Write completed with error (sct=0, sc=8) 00:08:19.279 Read completed with error (sct=0, sc=8) 00:08:19.279 [2024-11-20 12:22:24.987051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77a400d4b0 is same with the state(6) to be set 00:08:20.214 [2024-11-20 12:22:25.955107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff9a0 is same with the state(6) to be set 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 
[2024-11-20 12:22:25.987502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfe680 is same with the state(6) to be set 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 [2024-11-20 12:22:25.989651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77a400d020 is same with the state(6) to be set 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Write completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.472 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 
Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 [2024-11-20 12:22:25.989806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77a400d7e0 is same with the state(6) to be set 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read 
completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 Write completed with error (sct=0, sc=8) 00:08:20.473 Read completed with error (sct=0, sc=8) 00:08:20.473 [2024-11-20 12:22:25.990355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f77a4000c40 is same with the state(6) to be set 00:08:20.473 Initializing NVMe Controllers 00:08:20.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.473 Controller IO queue size 128, less than required. 00:08:20.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:20.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:20.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:20.473 Initialization complete. Launching workers. 
00:08:20.473 ======================================================== 00:08:20.473 Latency(us) 00:08:20.473 Device Information : IOPS MiB/s Average min max 00:08:20.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.70 0.08 878437.51 257.90 1007622.90 00:08:20.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.69 0.08 1045928.30 1443.29 2001251.73 00:08:20.473 ======================================================== 00:08:20.473 Total : 311.38 0.15 962718.02 257.90 2001251.73 00:08:20.473 00:08:20.473 [2024-11-20 12:22:25.990878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcff9a0 (9): Bad file descriptor 00:08:20.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:20.473 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.473 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:20.473 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 20111 00:08:20.473 12:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 20111 00:08:21.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (20111) - No such process 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 20111 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:21.040 12:22:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 20111 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.040 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 20111 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.041 12:22:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.041 [2024-11-20 12:22:26.521107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=20806 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:21.041 12:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.041 [2024-11-20 12:22:26.610680] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:21.300 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.867 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:21.867 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:21.867 12:22:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.434 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.434 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:22.434 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.002 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.002 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:23.002 12:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.574 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.574 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:23.574 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.832 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.832 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:23.832 12:22:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.090 Initializing NVMe Controllers 00:08:24.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:24.090 Controller IO queue size 128, less than required. 00:08:24.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:24.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:24.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:24.090 Initialization complete. Launching workers. 00:08:24.090 ======================================================== 00:08:24.090 Latency(us) 00:08:24.090 Device Information : IOPS MiB/s Average min max 00:08:24.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002192.77 1000128.20 1006150.34 00:08:24.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004004.38 1000183.10 1010217.29 00:08:24.091 ======================================================== 00:08:24.091 Total : 256.00 0.12 1003098.58 1000128.20 1010217.29 00:08:24.091 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 20806 00:08:24.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (20806) - No such process 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 20806 00:08:24.349 12:22:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.349 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.349 rmmod nvme_tcp 00:08:24.349 rmmod nvme_fabrics 00:08:24.349 rmmod nvme_keyring 00:08:24.608 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.608 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:24.608 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 20090 ']' 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 20090 ']' 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 20090' 00:08:24.609 killing process with pid 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 20090 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.609 12:22:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:24.609 12:22:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:27.146
00:08:27.146 real	0m16.319s
00:08:27.146 user	0m29.431s
00:08:27.146 sys	0m5.609s
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:27.146 ************************************
00:08:27.146 END TEST nvmf_delete_subsystem
00:08:27.146 ************************************
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:27.146 ************************************
00:08:27.146 START TEST nvmf_host_management
00:08:27.146 ************************************
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:27.146 * Looking for test storage...
00:08:27.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:27.146 12:22:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.146 12:22:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.146 --rc genhtml_branch_coverage=1 00:08:27.146 --rc genhtml_function_coverage=1 00:08:27.146 --rc genhtml_legend=1 00:08:27.146 --rc geninfo_all_blocks=1 00:08:27.146 --rc geninfo_unexecuted_blocks=1 00:08:27.146 00:08:27.146 ' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.146 --rc genhtml_branch_coverage=1 00:08:27.146 --rc genhtml_function_coverage=1 00:08:27.146 --rc genhtml_legend=1 00:08:27.146 --rc geninfo_all_blocks=1 00:08:27.146 --rc geninfo_unexecuted_blocks=1 00:08:27.146 00:08:27.146 ' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.146 --rc genhtml_branch_coverage=1 00:08:27.146 --rc genhtml_function_coverage=1 00:08:27.146 --rc genhtml_legend=1 00:08:27.146 --rc geninfo_all_blocks=1 00:08:27.146 --rc geninfo_unexecuted_blocks=1 00:08:27.146 00:08:27.146 ' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.146 --rc genhtml_branch_coverage=1 00:08:27.146 --rc genhtml_function_coverage=1 00:08:27.146 --rc genhtml_legend=1 00:08:27.146 --rc geninfo_all_blocks=1 00:08:27.146 --rc geninfo_unexecuted_blocks=1 00:08:27.146 00:08:27.146 ' 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:27.146 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:27.147 12:22:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.720 12:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.720 12:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:33.720 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:33.720 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.720 12:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:33.720 Found net devices under 0000:86:00.0: cvl_0_0 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:33.720 Found net devices under 0000:86:00.1: cvl_0_1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.720 12:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.720 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:33.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:33.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms
00:08:33.721
00:08:33.721 --- 10.0.0.2 ping statistics ---
00:08:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:33.721 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:33.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:33.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:08:33.721
00:08:33.721 --- 10.0.0.1 ping statistics ---
00:08:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:33.721 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=25033 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 25033 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 25033 ']' 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.721 12:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-11-20 12:22:38.812169] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:33.721 [2024-11-20 12:22:38.812230] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.721 [2024-11-20 12:22:38.891316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.721 [2024-11-20 12:22:38.931454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.721 [2024-11-20 12:22:38.931495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.721 [2024-11-20 12:22:38.931502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.721 [2024-11-20 12:22:38.931509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.721 [2024-11-20 12:22:38.931514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
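The `-m 0x1E` mask passed to `nvmf_tgt` above selects cores 1 through 4 (bits 1-4 set), consistent with the "Total cores available: 4" notice and the four reactor threads reported next. A small decoder for such hex core masks:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand an SPDK/DPDK-style core mask into the core IDs it selects."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

selected = cores_from_mask(0x1E)  # mask taken from the nvmf_tgt command line
```

The same decoding explains the later `bdevperf ... -c 0x1` invocation, whose single reactor starts on core 0.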
00:08:33.721 [2024-11-20 12:22:38.932907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.721 [2024-11-20 12:22:38.933017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.721 [2024-11-20 12:22:38.933099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.721 [2024-11-20 12:22:38.933100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-11-20 12:22:39.077352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:33.721 12:22:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 Malloc0 00:08:33.721 [2024-11-20 12:22:39.158048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=25078 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 25078 /var/tmp/bdevperf.sock 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 25078 ']' 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.721 { 00:08:33.721 "params": { 00:08:33.721 "name": "Nvme$subsystem", 00:08:33.721 "trtype": "$TEST_TRANSPORT", 00:08:33.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.721 "adrfam": "ipv4", 00:08:33.721 "trsvcid": "$NVMF_PORT", 00:08:33.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.721 "hdgst": ${hdgst:-false}, 
00:08:33.721 "ddgst": ${ddgst:-false} 00:08:33.721 }, 00:08:33.721 "method": "bdev_nvme_attach_controller" 00:08:33.721 } 00:08:33.721 EOF 00:08:33.721 )") 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:33.721 12:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.721 "params": { 00:08:33.721 "name": "Nvme0", 00:08:33.721 "trtype": "tcp", 00:08:33.721 "traddr": "10.0.0.2", 00:08:33.721 "adrfam": "ipv4", 00:08:33.721 "trsvcid": "4420", 00:08:33.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:33.721 "hdgst": false, 00:08:33.721 "ddgst": false 00:08:33.721 }, 00:08:33.722 "method": "bdev_nvme_attach_controller" 00:08:33.722 }' 00:08:33.722 [2024-11-20 12:22:39.254075] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:33.722 [2024-11-20 12:22:39.254117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25078 ] 00:08:33.722 [2024-11-20 12:22:39.329249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.722 [2024-11-20 12:22:39.369958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.980 Running I/O for 10 seconds... 
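`gen_nvmf_target_json` above expands a per-subsystem heredoc template into the attach-controller config that `bdevperf` reads via `--json /dev/fd/63`; the substituted result for subsystem 0 is printed in the log. A sketch reproducing that printed config (values taken from the log output; the helper name here is illustrative, not SPDK's):

```python
import json

def gen_target_json(subsystem: int, target_ip: str = "10.0.0.2",
                    port: str = "4420") -> str:
    """Rebuild the bdev_nvme_attach_controller config printed in the log,
    with $subsystem, $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted."""
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": target_ip,
            "adrfam": "ipv4",
            "trsvcid": port,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=2)

cfg = json.loads(gen_target_json(0))
```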
00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1219 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1219 -ge 100 ']' 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.550 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.550 [2024-11-20 12:22:40.179730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.550 [2024-11-20 12:22:40.179921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.550 [2024-11-20 12:22:40.179929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.179936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.179944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.179950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.179958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 
12:22:40.179964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.179972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.179978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.179986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.179992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 
[2024-11-20 12:22:40.180299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.551 [2024-11-20 12:22:40.180511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.551 [2024-11-20 12:22:40.180518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 
12:22:40.180635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.552 [2024-11-20 12:22:40.180724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.552 [2024-11-20 12:22:40.180834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.552 [2024-11-20 12:22:40.180849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.552 [2024-11-20 12:22:40.180863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.552 [2024-11-20 12:22:40.180876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.552 [2024-11-20 12:22:40.180883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1348500 is same with the state(6) to be set 00:08:34.552 [2024-11-20 12:22:40.181756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting 
controller 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.552 task offset: 32768 on job bdev=Nvme0n1 fails 00:08:34.552 00:08:34.552 Latency(us) 00:08:34.552 [2024-11-20T11:22:40.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.552 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:34.552 Job: Nvme0n1 ended in about 0.64 seconds with error 00:08:34.552 Verification LBA range: start 0x0 length 0x400 00:08:34.552 Nvme0n1 : 0.64 1984.65 124.04 99.23 0.00 30109.91 1630.60 26963.38 00:08:34.552 [2024-11-20T11:22:40.318Z] =================================================================================================================== 00:08:34.552 [2024-11-20T11:22:40.318Z] Total : 1984.65 124.04 99.23 0.00 30109.91 1630.60 26963.38 00:08:34.552 [2024-11-20 12:22:40.184138] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.552 [2024-11-20 12:22:40.184162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1348500 (9): Bad file descriptor 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.552 12:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:34.552 [2024-11-20 12:22:40.276475] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:35.490 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 25078 00:08:35.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (25078) - No such process 00:08:35.490 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:35.490 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.491 { 00:08:35.491 "params": { 00:08:35.491 "name": "Nvme$subsystem", 00:08:35.491 "trtype": "$TEST_TRANSPORT", 00:08:35.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.491 "adrfam": "ipv4", 00:08:35.491 "trsvcid": "$NVMF_PORT", 00:08:35.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.491 "hdgst": ${hdgst:-false}, 00:08:35.491 "ddgst": ${ddgst:-false} 00:08:35.491 }, 00:08:35.491 "method": "bdev_nvme_attach_controller" 00:08:35.491 } 00:08:35.491 EOF 00:08:35.491 )") 00:08:35.491 12:22:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:35.491 12:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.491 "params": { 00:08:35.491 "name": "Nvme0", 00:08:35.491 "trtype": "tcp", 00:08:35.491 "traddr": "10.0.0.2", 00:08:35.491 "adrfam": "ipv4", 00:08:35.491 "trsvcid": "4420", 00:08:35.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:35.491 "hdgst": false, 00:08:35.491 "ddgst": false 00:08:35.491 }, 00:08:35.491 "method": "bdev_nvme_attach_controller" 00:08:35.491 }' 00:08:35.491 [2024-11-20 12:22:41.246924] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:08:35.491 [2024-11-20 12:22:41.246975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid25541 ] 00:08:35.750 [2024-11-20 12:22:41.323054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.750 [2024-11-20 12:22:41.362491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.010 Running I/O for 1 seconds... 
00:08:36.947 2016.00 IOPS, 126.00 MiB/s 00:08:36.947 Latency(us) 00:08:36.947 [2024-11-20T11:22:42.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.947 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:36.947 Verification LBA range: start 0x0 length 0x400 00:08:36.947 Nvme0n1 : 1.01 2056.42 128.53 0.00 0.00 30526.25 2543.42 26713.72 00:08:36.947 [2024-11-20T11:22:42.713Z] =================================================================================================================== 00:08:36.947 [2024-11-20T11:22:42.713Z] Total : 2056.42 128.53 0.00 0.00 30526.25 2543.42 26713.72 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.206 12:22:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.206 rmmod nvme_tcp 00:08:37.206 rmmod nvme_fabrics 00:08:37.206 rmmod nvme_keyring 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 25033 ']' 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 25033 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 25033 ']' 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 25033 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 25033 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.206 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.207 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 25033' 00:08:37.207 killing process with pid 25033 00:08:37.207 12:22:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 25033 00:08:37.207 12:22:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 25033 00:08:37.467 [2024-11-20 12:22:43.024151] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.467 12:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.375 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.375 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:39.375 00:08:39.375 real 0m12.643s 00:08:39.375 user 0m20.624s 
00:08:39.375 sys 0m5.728s 00:08:39.375 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.375 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.375 ************************************ 00:08:39.375 END TEST nvmf_host_management 00:08:39.375 ************************************ 00:08:39.633 12:22:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.633 12:22:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.633 12:22:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.633 12:22:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.633 ************************************ 00:08:39.633 START TEST nvmf_lvol 00:08:39.634 ************************************ 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.634 * Looking for test storage... 
00:08:39.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.634 12:22:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 
00:08:39.634 00:08:39.634 ' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.634 12:22:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.634 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.894 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.895 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.895 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.895 12:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:46.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:46.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:46.468 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:46.469 
12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:46.469 Found net devices under 0000:86:00.0: cvl_0_0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:46.469 12:22:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:46.469 Found net devices under 0000:86:00.1: cvl_0_1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:08:46.469 00:08:46.469 --- 10.0.0.2 ping statistics --- 00:08:46.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.469 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:46.469 00:08:46.469 --- 10.0.0.1 ping statistics --- 00:08:46.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.469 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=29319 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 29319 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 29319 ']' 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 [2024-11-20 12:22:51.504895] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:08:46.469 [2024-11-20 12:22:51.504946] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.469 [2024-11-20 12:22:51.584250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.469 [2024-11-20 12:22:51.624597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.469 [2024-11-20 12:22:51.624639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.469 [2024-11-20 12:22:51.624645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.469 [2024-11-20 12:22:51.624651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.469 [2024-11-20 12:22:51.624655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.469 [2024-11-20 12:22:51.626188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.469 [2024-11-20 12:22:51.626081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.469 [2024-11-20 12:22:51.626189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.469 [2024-11-20 12:22:51.934647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.469 12:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.469 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:46.469 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:46.728 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:46.728 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:46.988 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:47.247 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=48694e51-a624-45c2-ba4b-dd61de1f793d 00:08:47.247 12:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 48694e51-a624-45c2-ba4b-dd61de1f793d lvol 20 00:08:47.506 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dae91ea5-bfca-49ec-8687-8499862eb965 00:08:47.506 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.506 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dae91ea5-bfca-49ec-8687-8499862eb965 00:08:47.767 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.050 [2024-11-20 12:22:53.589659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.050 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.050 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=29807 00:08:48.050 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:48.050 12:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:49.041 12:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dae91ea5-bfca-49ec-8687-8499862eb965 MY_SNAPSHOT 00:08:49.299 12:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=42a29184-6396-4f23-9373-fa232be6ef60 00:08:49.299 12:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dae91ea5-bfca-49ec-8687-8499862eb965 30 00:08:49.558 12:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 42a29184-6396-4f23-9373-fa232be6ef60 MY_CLONE 00:08:49.817 12:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=188aa031-971f-4a2d-85bf-7aa98ec6fc54 00:08:49.817 12:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 188aa031-971f-4a2d-85bf-7aa98ec6fc54 00:08:50.386 12:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 29807 00:08:58.504 Initializing NVMe Controllers 00:08:58.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:58.504 Controller IO queue size 128, less than required. 00:08:58.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:58.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:58.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:58.504 Initialization complete. Launching workers. 00:08:58.504 ======================================================== 00:08:58.504 Latency(us) 00:08:58.504 Device Information : IOPS MiB/s Average min max 00:08:58.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12428.45 48.55 10300.94 2095.63 55630.39 00:08:58.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12508.75 48.86 10233.87 827.87 48548.49 00:08:58.504 ======================================================== 00:08:58.504 Total : 24937.20 97.41 10267.30 827.87 55630.39 00:08:58.504 00:08:58.504 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.763 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dae91ea5-bfca-49ec-8687-8499862eb965 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48694e51-a624-45c2-ba4b-dd61de1f793d 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.021 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.280 rmmod nvme_tcp 00:08:59.280 rmmod nvme_fabrics 00:08:59.280 rmmod nvme_keyring 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 29319 ']' 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 29319 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 29319 ']' 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 29319 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 29319 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 29319' 00:08:59.280 killing process with pid 29319 00:08:59.280 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@973 -- # kill 29319 00:08:59.281 12:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 29319 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.540 12:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.445 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.445 00:09:01.445 real 0m21.976s 00:09:01.445 user 1m2.895s 00:09:01.445 sys 0m7.756s 00:09:01.445 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.445 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:01.445 ************************************ 00:09:01.445 END TEST nvmf_lvol 00:09:01.445 
************************************ 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.704 ************************************ 00:09:01.704 START TEST nvmf_lvs_grow 00:09:01.704 ************************************ 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:01.704 * Looking for test storage... 00:09:01.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:01.704 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.705 --rc genhtml_branch_coverage=1 00:09:01.705 --rc genhtml_function_coverage=1 00:09:01.705 --rc genhtml_legend=1 00:09:01.705 --rc geninfo_all_blocks=1 00:09:01.705 --rc geninfo_unexecuted_blocks=1 00:09:01.705 00:09:01.705 ' 
00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.705 --rc genhtml_branch_coverage=1 00:09:01.705 --rc genhtml_function_coverage=1 00:09:01.705 --rc genhtml_legend=1 00:09:01.705 --rc geninfo_all_blocks=1 00:09:01.705 --rc geninfo_unexecuted_blocks=1 00:09:01.705 00:09:01.705 ' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.705 --rc genhtml_branch_coverage=1 00:09:01.705 --rc genhtml_function_coverage=1 00:09:01.705 --rc genhtml_legend=1 00:09:01.705 --rc geninfo_all_blocks=1 00:09:01.705 --rc geninfo_unexecuted_blocks=1 00:09:01.705 00:09:01.705 ' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.705 --rc genhtml_branch_coverage=1 00:09:01.705 --rc genhtml_function_coverage=1 00:09:01.705 --rc genhtml_legend=1 00:09:01.705 --rc geninfo_all_blocks=1 00:09:01.705 --rc geninfo_unexecuted_blocks=1 00:09:01.705 00:09:01.705 ' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.705 12:23:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.705 
12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.705 12:23:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.705 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.965 
12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.965 12:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.536 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.536 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.537 
12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.537 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.537 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.537 12:23:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:09:08.537 00:09:08.537 --- 10.0.0.2 ping statistics --- 00:09:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.537 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:09:08.537 00:09:08.537 --- 10.0.0.1 ping statistics --- 00:09:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.537 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=35201 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 35201 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 35201 ']' 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 [2024-11-20 12:23:13.554997] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:09:08.537 [2024-11-20 12:23:13.555036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.537 [2024-11-20 12:23:13.634012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.537 [2024-11-20 12:23:13.674568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.537 [2024-11-20 12:23:13.674606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.537 [2024-11-20 12:23:13.674613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.537 [2024-11-20 12:23:13.674619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.537 [2024-11-20 12:23:13.674624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:08.537 [2024-11-20 12:23:13.675197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.537 12:23:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.537 [2024-11-20 12:23:13.978503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.538 ************************************ 00:09:08.538 START TEST lvs_grow_clean 00:09:08.538 ************************************ 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.538 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.797 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=69cc5301-4d4d-4379-ba76-3406a1591266 00:09:08.797 12:23:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:08.797 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:09.057 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:09.057 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:09.057 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 69cc5301-4d4d-4379-ba76-3406a1591266 lvol 150 00:09:09.315 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 00:09:09.315 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.315 12:23:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.315 [2024-11-20 12:23:15.016794] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.315 [2024-11-20 12:23:15.016845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.315 true 00:09:09.315 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:09.315 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.575 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.575 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:09.834 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 00:09:09.834 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.093 [2024-11-20 12:23:15.742991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.093 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=35678 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.352 12:23:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 35678 /var/tmp/bdevperf.sock 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 35678 ']' 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.352 12:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:10.352 [2024-11-20 12:23:15.986404] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:09:10.352 [2024-11-20 12:23:15.986451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35678 ] 00:09:10.352 [2024-11-20 12:23:16.059798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.352 [2024-11-20 12:23:16.101474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.611 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.611 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:10.611 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:10.869 Nvme0n1 00:09:10.869 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:11.128 [ 00:09:11.128 { 00:09:11.128 "name": "Nvme0n1", 00:09:11.128 "aliases": [ 00:09:11.128 "aa552b7d-4ba1-41de-ba1a-3f5e35add6b2" 00:09:11.128 ], 00:09:11.128 "product_name": "NVMe disk", 00:09:11.128 "block_size": 4096, 00:09:11.128 "num_blocks": 38912, 00:09:11.128 "uuid": "aa552b7d-4ba1-41de-ba1a-3f5e35add6b2", 00:09:11.128 "numa_id": 1, 00:09:11.128 "assigned_rate_limits": { 00:09:11.128 "rw_ios_per_sec": 0, 00:09:11.128 "rw_mbytes_per_sec": 0, 00:09:11.128 "r_mbytes_per_sec": 0, 00:09:11.128 "w_mbytes_per_sec": 0 00:09:11.128 }, 00:09:11.128 "claimed": false, 00:09:11.128 "zoned": false, 00:09:11.128 "supported_io_types": { 00:09:11.128 "read": true, 
00:09:11.128 "write": true, 00:09:11.128 "unmap": true, 00:09:11.128 "flush": true, 00:09:11.128 "reset": true, 00:09:11.128 "nvme_admin": true, 00:09:11.128 "nvme_io": true, 00:09:11.128 "nvme_io_md": false, 00:09:11.128 "write_zeroes": true, 00:09:11.128 "zcopy": false, 00:09:11.128 "get_zone_info": false, 00:09:11.128 "zone_management": false, 00:09:11.128 "zone_append": false, 00:09:11.128 "compare": true, 00:09:11.128 "compare_and_write": true, 00:09:11.128 "abort": true, 00:09:11.128 "seek_hole": false, 00:09:11.128 "seek_data": false, 00:09:11.128 "copy": true, 00:09:11.128 "nvme_iov_md": false 00:09:11.128 }, 00:09:11.128 "memory_domains": [ 00:09:11.128 { 00:09:11.128 "dma_device_id": "system", 00:09:11.128 "dma_device_type": 1 00:09:11.128 } 00:09:11.128 ], 00:09:11.128 "driver_specific": { 00:09:11.128 "nvme": [ 00:09:11.128 { 00:09:11.128 "trid": { 00:09:11.128 "trtype": "TCP", 00:09:11.128 "adrfam": "IPv4", 00:09:11.128 "traddr": "10.0.0.2", 00:09:11.128 "trsvcid": "4420", 00:09:11.128 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:11.128 }, 00:09:11.128 "ctrlr_data": { 00:09:11.128 "cntlid": 1, 00:09:11.128 "vendor_id": "0x8086", 00:09:11.128 "model_number": "SPDK bdev Controller", 00:09:11.128 "serial_number": "SPDK0", 00:09:11.128 "firmware_revision": "25.01", 00:09:11.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:11.128 "oacs": { 00:09:11.128 "security": 0, 00:09:11.128 "format": 0, 00:09:11.128 "firmware": 0, 00:09:11.128 "ns_manage": 0 00:09:11.128 }, 00:09:11.128 "multi_ctrlr": true, 00:09:11.128 "ana_reporting": false 00:09:11.128 }, 00:09:11.129 "vs": { 00:09:11.129 "nvme_version": "1.3" 00:09:11.129 }, 00:09:11.129 "ns_data": { 00:09:11.129 "id": 1, 00:09:11.129 "can_share": true 00:09:11.129 } 00:09:11.129 } 00:09:11.129 ], 00:09:11.129 "mp_policy": "active_passive" 00:09:11.129 } 00:09:11.129 } 00:09:11.129 ] 00:09:11.129 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=35715 
00:09:11.129 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.129 12:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.129 Running I/O for 10 seconds... 00:09:12.066 Latency(us) 00:09:12.066 [2024-11-20T11:23:17.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.066 Nvme0n1 : 1.00 23565.00 92.05 0.00 0.00 0.00 0.00 0.00 00:09:12.066 [2024-11-20T11:23:17.832Z] =================================================================================================================== 00:09:12.066 [2024-11-20T11:23:17.832Z] Total : 23565.00 92.05 0.00 0.00 0.00 0.00 0.00 00:09:12.066 00:09:13.002 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:13.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.260 Nvme0n1 : 2.00 23653.00 92.39 0.00 0.00 0.00 0.00 0.00 00:09:13.260 [2024-11-20T11:23:19.026Z] =================================================================================================================== 00:09:13.260 [2024-11-20T11:23:19.026Z] Total : 23653.00 92.39 0.00 0.00 0.00 0.00 0.00 00:09:13.260 00:09:13.260 true 00:09:13.260 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:13.260 12:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:13.519 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:13.519 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:13.519 12:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 35715 00:09:14.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.085 Nvme0n1 : 3.00 23711.33 92.62 0.00 0.00 0.00 0.00 0.00 00:09:14.085 [2024-11-20T11:23:19.851Z] =================================================================================================================== 00:09:14.085 [2024-11-20T11:23:19.851Z] Total : 23711.33 92.62 0.00 0.00 0.00 0.00 0.00 00:09:14.085 00:09:15.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.461 Nvme0n1 : 4.00 23706.75 92.60 0.00 0.00 0.00 0.00 0.00 00:09:15.461 [2024-11-20T11:23:21.227Z] =================================================================================================================== 00:09:15.461 [2024-11-20T11:23:21.227Z] Total : 23706.75 92.60 0.00 0.00 0.00 0.00 0.00 00:09:15.461 00:09:16.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.397 Nvme0n1 : 5.00 23757.40 92.80 0.00 0.00 0.00 0.00 0.00 00:09:16.397 [2024-11-20T11:23:22.163Z] =================================================================================================================== 00:09:16.397 [2024-11-20T11:23:22.163Z] Total : 23757.40 92.80 0.00 0.00 0.00 0.00 0.00 00:09:16.397 00:09:17.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.333 Nvme0n1 : 6.00 23801.33 92.97 0.00 0.00 0.00 0.00 0.00 00:09:17.333 [2024-11-20T11:23:23.099Z] =================================================================================================================== 00:09:17.333 
[2024-11-20T11:23:23.099Z] Total : 23801.33 92.97 0.00 0.00 0.00 0.00 0.00 00:09:17.333 00:09:18.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.267 Nvme0n1 : 7.00 23823.57 93.06 0.00 0.00 0.00 0.00 0.00 00:09:18.267 [2024-11-20T11:23:24.033Z] =================================================================================================================== 00:09:18.267 [2024-11-20T11:23:24.033Z] Total : 23823.57 93.06 0.00 0.00 0.00 0.00 0.00 00:09:18.267 00:09:19.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.201 Nvme0n1 : 8.00 23850.62 93.17 0.00 0.00 0.00 0.00 0.00 00:09:19.201 [2024-11-20T11:23:24.967Z] =================================================================================================================== 00:09:19.201 [2024-11-20T11:23:24.967Z] Total : 23850.62 93.17 0.00 0.00 0.00 0.00 0.00 00:09:19.201 00:09:20.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.136 Nvme0n1 : 9.00 23835.22 93.11 0.00 0.00 0.00 0.00 0.00 00:09:20.136 [2024-11-20T11:23:25.902Z] =================================================================================================================== 00:09:20.136 [2024-11-20T11:23:25.902Z] Total : 23835.22 93.11 0.00 0.00 0.00 0.00 0.00 00:09:20.136 00:09:21.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.513 Nvme0n1 : 10.00 23853.00 93.18 0.00 0.00 0.00 0.00 0.00 00:09:21.513 [2024-11-20T11:23:27.279Z] =================================================================================================================== 00:09:21.513 [2024-11-20T11:23:27.279Z] Total : 23853.00 93.18 0.00 0.00 0.00 0.00 0.00 00:09:21.513 00:09:21.513 00:09:21.513 Latency(us) 00:09:21.513 [2024-11-20T11:23:27.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:21.513 Nvme0n1 : 10.00 23852.67 93.17 0.00 0.00 5363.15 2371.78 11109.91 00:09:21.513 [2024-11-20T11:23:27.279Z] =================================================================================================================== 00:09:21.513 [2024-11-20T11:23:27.279Z] Total : 23852.67 93.17 0.00 0.00 5363.15 2371.78 11109.91 00:09:21.513 { 00:09:21.513 "results": [ 00:09:21.513 { 00:09:21.513 "job": "Nvme0n1", 00:09:21.513 "core_mask": "0x2", 00:09:21.513 "workload": "randwrite", 00:09:21.513 "status": "finished", 00:09:21.513 "queue_depth": 128, 00:09:21.513 "io_size": 4096, 00:09:21.513 "runtime": 10.002821, 00:09:21.513 "iops": 23852.67116146535, 00:09:21.513 "mibps": 93.17449672447403, 00:09:21.513 "io_failed": 0, 00:09:21.513 "io_timeout": 0, 00:09:21.513 "avg_latency_us": 5363.145331543483, 00:09:21.513 "min_latency_us": 2371.7790476190476, 00:09:21.513 "max_latency_us": 11109.91238095238 00:09:21.513 } 00:09:21.513 ], 00:09:21.513 "core_count": 1 00:09:21.513 } 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 35678 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 35678 ']' 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 35678 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 35678 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 35678' 00:09:21.513 killing process with pid 35678 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 35678 00:09:21.513 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.513 00:09:21.513 Latency(us) 00:09:21.513 [2024-11-20T11:23:27.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.513 [2024-11-20T11:23:27.279Z] =================================================================================================================== 00:09:21.513 [2024-11-20T11:23:27.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.513 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 35678 00:09:21.513 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.513 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:21.772 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:21.772 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:22.031 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:22.031 12:23:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:22.031 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:22.289 [2024-11-20 12:23:27.827921] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:22.289 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.290 12:23:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:22.290 12:23:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:22.290 request: 00:09:22.290 { 00:09:22.290 "uuid": "69cc5301-4d4d-4379-ba76-3406a1591266", 00:09:22.290 "method": "bdev_lvol_get_lvstores", 00:09:22.290 "req_id": 1 00:09:22.290 } 00:09:22.290 Got JSON-RPC error response 00:09:22.290 response: 00:09:22.290 { 00:09:22.290 "code": -19, 00:09:22.290 "message": "No such device" 00:09:22.290 } 00:09:22.290 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:22.290 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.290 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.290 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.290 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.549 aio_bdev 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.549 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:22.808 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 -t 2000 00:09:23.068 [ 00:09:23.068 { 00:09:23.068 "name": "aa552b7d-4ba1-41de-ba1a-3f5e35add6b2", 00:09:23.068 "aliases": [ 00:09:23.068 "lvs/lvol" 00:09:23.068 ], 00:09:23.068 "product_name": "Logical Volume", 00:09:23.068 "block_size": 4096, 00:09:23.068 "num_blocks": 38912, 00:09:23.068 "uuid": "aa552b7d-4ba1-41de-ba1a-3f5e35add6b2", 00:09:23.068 "assigned_rate_limits": { 00:09:23.068 "rw_ios_per_sec": 0, 00:09:23.068 "rw_mbytes_per_sec": 0, 00:09:23.068 "r_mbytes_per_sec": 0, 00:09:23.068 "w_mbytes_per_sec": 0 00:09:23.068 }, 00:09:23.068 "claimed": false, 00:09:23.068 "zoned": false, 00:09:23.068 "supported_io_types": { 00:09:23.068 "read": true, 00:09:23.068 "write": true, 00:09:23.068 "unmap": true, 00:09:23.068 "flush": false, 00:09:23.068 "reset": true, 00:09:23.068 
"nvme_admin": false, 00:09:23.068 "nvme_io": false, 00:09:23.068 "nvme_io_md": false, 00:09:23.068 "write_zeroes": true, 00:09:23.068 "zcopy": false, 00:09:23.068 "get_zone_info": false, 00:09:23.068 "zone_management": false, 00:09:23.068 "zone_append": false, 00:09:23.068 "compare": false, 00:09:23.068 "compare_and_write": false, 00:09:23.068 "abort": false, 00:09:23.068 "seek_hole": true, 00:09:23.068 "seek_data": true, 00:09:23.068 "copy": false, 00:09:23.068 "nvme_iov_md": false 00:09:23.068 }, 00:09:23.068 "driver_specific": { 00:09:23.068 "lvol": { 00:09:23.068 "lvol_store_uuid": "69cc5301-4d4d-4379-ba76-3406a1591266", 00:09:23.068 "base_bdev": "aio_bdev", 00:09:23.068 "thin_provision": false, 00:09:23.068 "num_allocated_clusters": 38, 00:09:23.068 "snapshot": false, 00:09:23.068 "clone": false, 00:09:23.068 "esnap_clone": false 00:09:23.068 } 00:09:23.068 } 00:09:23.068 } 00:09:23.068 ] 00:09:23.068 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:23.068 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:23.068 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:23.068 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:23.327 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:23.327 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:23.327 12:23:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:23.327 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa552b7d-4ba1-41de-ba1a-3f5e35add6b2 00:09:23.591 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 69cc5301-4d4d-4379-ba76-3406a1591266 00:09:23.850 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.850 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.109 00:09:24.109 real 0m15.586s 00:09:24.109 user 0m15.130s 00:09:24.109 sys 0m1.521s 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:24.109 ************************************ 00:09:24.109 END TEST lvs_grow_clean 00:09:24.109 ************************************ 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.109 ************************************ 
00:09:24.109 START TEST lvs_grow_dirty 00:09:24.109 ************************************ 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:24.109 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:24.110 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:24.110 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:24.110 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.110 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.110 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.368 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:24.368 12:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:24.368 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=128b043d-8202-4f51-814e-39da609e88e4 00:09:24.368 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:24.368 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:24.627 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:24.628 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:24.628 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 128b043d-8202-4f51-814e-39da609e88e4 lvol 150 00:09:24.886 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:24.886 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.886 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:25.144 [2024-11-20 12:23:30.678160] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:25.144 [2024-11-20 12:23:30.678230] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:25.144 true 00:09:25.144 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:25.144 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:25.144 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:25.144 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.402 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:25.662 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:25.921 [2024-11-20 12:23:31.440430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=38301 00:09:25.921 12:23:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 38301 /var/tmp/bdevperf.sock 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 38301 ']' 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.921 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.180 [2024-11-20 12:23:31.684680] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:09:26.180 [2024-11-20 12:23:31.684725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38301 ] 00:09:26.180 [2024-11-20 12:23:31.759039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.180 [2024-11-20 12:23:31.798896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.180 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.180 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:26.180 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:26.439 Nvme0n1 00:09:26.439 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:26.698 [ 00:09:26.698 { 00:09:26.698 "name": "Nvme0n1", 00:09:26.698 "aliases": [ 00:09:26.698 "f51e0667-5eb6-4955-b21e-2de3a2148a0e" 00:09:26.698 ], 00:09:26.698 "product_name": "NVMe disk", 00:09:26.698 "block_size": 4096, 00:09:26.698 "num_blocks": 38912, 00:09:26.698 "uuid": "f51e0667-5eb6-4955-b21e-2de3a2148a0e", 00:09:26.698 "numa_id": 1, 00:09:26.698 "assigned_rate_limits": { 00:09:26.698 "rw_ios_per_sec": 0, 00:09:26.698 "rw_mbytes_per_sec": 0, 00:09:26.698 "r_mbytes_per_sec": 0, 00:09:26.698 "w_mbytes_per_sec": 0 00:09:26.698 }, 00:09:26.698 "claimed": false, 00:09:26.698 "zoned": false, 00:09:26.698 "supported_io_types": { 00:09:26.698 "read": true, 
00:09:26.698 "write": true, 00:09:26.698 "unmap": true, 00:09:26.698 "flush": true, 00:09:26.698 "reset": true, 00:09:26.698 "nvme_admin": true, 00:09:26.698 "nvme_io": true, 00:09:26.698 "nvme_io_md": false, 00:09:26.698 "write_zeroes": true, 00:09:26.698 "zcopy": false, 00:09:26.698 "get_zone_info": false, 00:09:26.698 "zone_management": false, 00:09:26.698 "zone_append": false, 00:09:26.698 "compare": true, 00:09:26.698 "compare_and_write": true, 00:09:26.699 "abort": true, 00:09:26.699 "seek_hole": false, 00:09:26.699 "seek_data": false, 00:09:26.699 "copy": true, 00:09:26.699 "nvme_iov_md": false 00:09:26.699 }, 00:09:26.699 "memory_domains": [ 00:09:26.699 { 00:09:26.699 "dma_device_id": "system", 00:09:26.699 "dma_device_type": 1 00:09:26.699 } 00:09:26.699 ], 00:09:26.699 "driver_specific": { 00:09:26.699 "nvme": [ 00:09:26.699 { 00:09:26.699 "trid": { 00:09:26.699 "trtype": "TCP", 00:09:26.699 "adrfam": "IPv4", 00:09:26.699 "traddr": "10.0.0.2", 00:09:26.699 "trsvcid": "4420", 00:09:26.699 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:26.699 }, 00:09:26.699 "ctrlr_data": { 00:09:26.699 "cntlid": 1, 00:09:26.699 "vendor_id": "0x8086", 00:09:26.699 "model_number": "SPDK bdev Controller", 00:09:26.699 "serial_number": "SPDK0", 00:09:26.699 "firmware_revision": "25.01", 00:09:26.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:26.699 "oacs": { 00:09:26.699 "security": 0, 00:09:26.699 "format": 0, 00:09:26.699 "firmware": 0, 00:09:26.699 "ns_manage": 0 00:09:26.699 }, 00:09:26.699 "multi_ctrlr": true, 00:09:26.699 "ana_reporting": false 00:09:26.699 }, 00:09:26.699 "vs": { 00:09:26.699 "nvme_version": "1.3" 00:09:26.699 }, 00:09:26.699 "ns_data": { 00:09:26.699 "id": 1, 00:09:26.699 "can_share": true 00:09:26.699 } 00:09:26.699 } 00:09:26.699 ], 00:09:26.699 "mp_policy": "active_passive" 00:09:26.699 } 00:09:26.699 } 00:09:26.699 ] 00:09:26.699 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=38424 
00:09:26.699 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:26.699 12:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:26.958 Running I/O for 10 seconds... 00:09:27.894 Latency(us) 00:09:27.894 [2024-11-20T11:23:33.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.894 Nvme0n1 : 1.00 23466.00 91.66 0.00 0.00 0.00 0.00 0.00 00:09:27.894 [2024-11-20T11:23:33.660Z] =================================================================================================================== 00:09:27.894 [2024-11-20T11:23:33.660Z] Total : 23466.00 91.66 0.00 0.00 0.00 0.00 0.00 00:09:27.894 00:09:28.832 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:28.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.832 Nvme0n1 : 2.00 23528.50 91.91 0.00 0.00 0.00 0.00 0.00 00:09:28.832 [2024-11-20T11:23:34.598Z] =================================================================================================================== 00:09:28.832 [2024-11-20T11:23:34.598Z] Total : 23528.50 91.91 0.00 0.00 0.00 0.00 0.00 00:09:28.832 00:09:29.091 true 00:09:29.091 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:29.091 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:29.091 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:29.091 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:29.091 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 38424 00:09:30.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.028 Nvme0n1 : 3.00 23540.33 91.95 0.00 0.00 0.00 0.00 0.00 00:09:30.028 [2024-11-20T11:23:35.794Z] =================================================================================================================== 00:09:30.028 [2024-11-20T11:23:35.794Z] Total : 23540.33 91.95 0.00 0.00 0.00 0.00 0.00 00:09:30.028 00:09:30.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.964 Nvme0n1 : 4.00 23629.25 92.30 0.00 0.00 0.00 0.00 0.00 00:09:30.964 [2024-11-20T11:23:36.730Z] =================================================================================================================== 00:09:30.964 [2024-11-20T11:23:36.730Z] Total : 23629.25 92.30 0.00 0.00 0.00 0.00 0.00 00:09:30.964 00:09:31.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.900 Nvme0n1 : 5.00 23683.80 92.51 0.00 0.00 0.00 0.00 0.00 00:09:31.900 [2024-11-20T11:23:37.666Z] =================================================================================================================== 00:09:31.900 [2024-11-20T11:23:37.666Z] Total : 23683.80 92.51 0.00 0.00 0.00 0.00 0.00 00:09:31.900 00:09:32.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.836 Nvme0n1 : 6.00 23710.50 92.62 0.00 0.00 0.00 0.00 0.00 00:09:32.836 [2024-11-20T11:23:38.602Z] =================================================================================================================== 00:09:32.836 
[2024-11-20T11:23:38.602Z] Total : 23710.50 92.62 0.00 0.00 0.00 0.00 0.00 00:09:32.836 00:09:33.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.773 Nvme0n1 : 7.00 23739.00 92.73 0.00 0.00 0.00 0.00 0.00 00:09:33.773 [2024-11-20T11:23:39.539Z] =================================================================================================================== 00:09:33.773 [2024-11-20T11:23:39.539Z] Total : 23739.00 92.73 0.00 0.00 0.00 0.00 0.00 00:09:33.773 00:09:35.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.151 Nvme0n1 : 8.00 23715.88 92.64 0.00 0.00 0.00 0.00 0.00 00:09:35.151 [2024-11-20T11:23:40.917Z] =================================================================================================================== 00:09:35.151 [2024-11-20T11:23:40.917Z] Total : 23715.88 92.64 0.00 0.00 0.00 0.00 0.00 00:09:35.151 00:09:36.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.085 Nvme0n1 : 9.00 23693.56 92.55 0.00 0.00 0.00 0.00 0.00 00:09:36.085 [2024-11-20T11:23:41.851Z] =================================================================================================================== 00:09:36.085 [2024-11-20T11:23:41.851Z] Total : 23693.56 92.55 0.00 0.00 0.00 0.00 0.00 00:09:36.085 00:09:37.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.024 Nvme0n1 : 10.00 23721.60 92.66 0.00 0.00 0.00 0.00 0.00 00:09:37.024 [2024-11-20T11:23:42.790Z] =================================================================================================================== 00:09:37.024 [2024-11-20T11:23:42.790Z] Total : 23721.60 92.66 0.00 0.00 0.00 0.00 0.00 00:09:37.024 00:09:37.024 00:09:37.024 Latency(us) 00:09:37.024 [2024-11-20T11:23:42.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:37.024 Nvme0n1 : 10.01 23722.05 92.66 0.00 0.00 5392.91 2559.02 11297.16 00:09:37.024 [2024-11-20T11:23:42.790Z] =================================================================================================================== 00:09:37.024 [2024-11-20T11:23:42.790Z] Total : 23722.05 92.66 0.00 0.00 5392.91 2559.02 11297.16 00:09:37.024 { 00:09:37.024 "results": [ 00:09:37.024 { 00:09:37.024 "job": "Nvme0n1", 00:09:37.024 "core_mask": "0x2", 00:09:37.024 "workload": "randwrite", 00:09:37.024 "status": "finished", 00:09:37.024 "queue_depth": 128, 00:09:37.024 "io_size": 4096, 00:09:37.024 "runtime": 10.005205, 00:09:37.024 "iops": 23722.05267158444, 00:09:37.024 "mibps": 92.66426824837671, 00:09:37.024 "io_failed": 0, 00:09:37.024 "io_timeout": 0, 00:09:37.024 "avg_latency_us": 5392.912827031851, 00:09:37.024 "min_latency_us": 2559.024761904762, 00:09:37.024 "max_latency_us": 11297.158095238095 00:09:37.024 } 00:09:37.024 ], 00:09:37.024 "core_count": 1 00:09:37.024 } 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 38301 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 38301 ']' 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 38301 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 38301 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 38301' 00:09:37.024 killing process with pid 38301 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 38301 00:09:37.024 Received shutdown signal, test time was about 10.000000 seconds 00:09:37.024 00:09:37.024 Latency(us) 00:09:37.024 [2024-11-20T11:23:42.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.024 [2024-11-20T11:23:42.790Z] =================================================================================================================== 00:09:37.024 [2024-11-20T11:23:42.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 38301 00:09:37.024 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.283 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:37.542 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:37.542 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:37.800 12:23:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 35201 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 35201 00:09:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 35201 Killed "${NVMF_APP[@]}" "$@" 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.800 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=40201 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 40201 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 40201 ']' 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- 
# local max_retries=100 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.801 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:37.801 [2024-11-20 12:23:43.428644] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:37.801 [2024-11-20 12:23:43.428691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.801 [2024-11-20 12:23:43.508264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.801 [2024-11-20 12:23:43.546869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.801 [2024-11-20 12:23:43.546904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.801 [2024-11-20 12:23:43.546912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.801 [2024-11-20 12:23:43.546918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.801 [2024-11-20 12:23:43.546922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:37.801 [2024-11-20 12:23:43.547511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.060 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.319 [2024-11-20 12:23:43.857730] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:38.319 [2024-11-20 12:23:43.857823] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:38.319 [2024-11-20 12:23:43.857849] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f51e0667-5eb6-4955-b21e-2de3a2148a0e 
00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.319 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:38.319 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f51e0667-5eb6-4955-b21e-2de3a2148a0e -t 2000 00:09:38.577 [ 00:09:38.577 { 00:09:38.577 "name": "f51e0667-5eb6-4955-b21e-2de3a2148a0e", 00:09:38.577 "aliases": [ 00:09:38.577 "lvs/lvol" 00:09:38.577 ], 00:09:38.578 "product_name": "Logical Volume", 00:09:38.578 "block_size": 4096, 00:09:38.578 "num_blocks": 38912, 00:09:38.578 "uuid": "f51e0667-5eb6-4955-b21e-2de3a2148a0e", 00:09:38.578 "assigned_rate_limits": { 00:09:38.578 "rw_ios_per_sec": 0, 00:09:38.578 "rw_mbytes_per_sec": 0, 00:09:38.578 "r_mbytes_per_sec": 0, 00:09:38.578 "w_mbytes_per_sec": 0 00:09:38.578 }, 00:09:38.578 "claimed": false, 00:09:38.578 "zoned": false, 00:09:38.578 "supported_io_types": { 00:09:38.578 "read": true, 00:09:38.578 "write": true, 00:09:38.578 "unmap": true, 00:09:38.578 "flush": false, 00:09:38.578 "reset": true, 00:09:38.578 "nvme_admin": false, 00:09:38.578 "nvme_io": false, 00:09:38.578 "nvme_io_md": false, 00:09:38.578 "write_zeroes": true, 00:09:38.578 "zcopy": false, 00:09:38.578 "get_zone_info": false, 00:09:38.578 "zone_management": false, 00:09:38.578 "zone_append": 
false, 00:09:38.578 "compare": false, 00:09:38.578 "compare_and_write": false, 00:09:38.578 "abort": false, 00:09:38.578 "seek_hole": true, 00:09:38.578 "seek_data": true, 00:09:38.578 "copy": false, 00:09:38.578 "nvme_iov_md": false 00:09:38.578 }, 00:09:38.578 "driver_specific": { 00:09:38.578 "lvol": { 00:09:38.578 "lvol_store_uuid": "128b043d-8202-4f51-814e-39da609e88e4", 00:09:38.578 "base_bdev": "aio_bdev", 00:09:38.578 "thin_provision": false, 00:09:38.578 "num_allocated_clusters": 38, 00:09:38.578 "snapshot": false, 00:09:38.578 "clone": false, 00:09:38.578 "esnap_clone": false 00:09:38.578 } 00:09:38.578 } 00:09:38.578 } 00:09:38.578 ] 00:09:38.578 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:38.578 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:38.578 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:38.856 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:38.856 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:38.856 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:39.166 [2024-11-20 12:23:44.822887] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.166 12:23:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:39.166 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:39.448 request: 00:09:39.448 { 00:09:39.448 "uuid": "128b043d-8202-4f51-814e-39da609e88e4", 00:09:39.448 "method": "bdev_lvol_get_lvstores", 00:09:39.448 "req_id": 1 00:09:39.448 } 00:09:39.448 Got JSON-RPC error response 00:09:39.448 response: 00:09:39.448 { 00:09:39.448 "code": -19, 00:09:39.448 "message": "No such device" 00:09:39.448 } 00:09:39.448 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:39.448 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.448 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:39.448 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.448 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:39.740 aio_bdev 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:39.740 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f51e0667-5eb6-4955-b21e-2de3a2148a0e -t 2000 00:09:39.999 [ 00:09:39.999 { 00:09:39.999 "name": "f51e0667-5eb6-4955-b21e-2de3a2148a0e", 00:09:39.999 "aliases": [ 00:09:39.999 "lvs/lvol" 00:09:39.999 ], 00:09:39.999 "product_name": "Logical Volume", 00:09:39.999 "block_size": 4096, 00:09:39.999 "num_blocks": 38912, 00:09:39.999 "uuid": "f51e0667-5eb6-4955-b21e-2de3a2148a0e", 00:09:39.999 "assigned_rate_limits": { 00:09:39.999 "rw_ios_per_sec": 0, 00:09:39.999 "rw_mbytes_per_sec": 0, 00:09:39.999 "r_mbytes_per_sec": 0, 00:09:39.999 "w_mbytes_per_sec": 0 00:09:39.999 }, 00:09:39.999 "claimed": false, 00:09:39.999 "zoned": false, 00:09:39.999 "supported_io_types": { 00:09:39.999 "read": true, 00:09:39.999 "write": true, 00:09:39.999 "unmap": true, 00:09:39.999 "flush": false, 00:09:39.999 "reset": true, 00:09:39.999 "nvme_admin": false, 00:09:39.999 "nvme_io": false, 00:09:39.999 "nvme_io_md": false, 00:09:39.999 "write_zeroes": true, 00:09:39.999 "zcopy": false, 00:09:39.999 "get_zone_info": false, 00:09:39.999 "zone_management": false, 00:09:40.000 "zone_append": false, 00:09:40.000 "compare": false, 00:09:40.000 "compare_and_write": false, 
00:09:40.000 "abort": false, 00:09:40.000 "seek_hole": true, 00:09:40.000 "seek_data": true, 00:09:40.000 "copy": false, 00:09:40.000 "nvme_iov_md": false 00:09:40.000 }, 00:09:40.000 "driver_specific": { 00:09:40.000 "lvol": { 00:09:40.000 "lvol_store_uuid": "128b043d-8202-4f51-814e-39da609e88e4", 00:09:40.000 "base_bdev": "aio_bdev", 00:09:40.000 "thin_provision": false, 00:09:40.000 "num_allocated_clusters": 38, 00:09:40.000 "snapshot": false, 00:09:40.000 "clone": false, 00:09:40.000 "esnap_clone": false 00:09:40.000 } 00:09:40.000 } 00:09:40.000 } 00:09:40.000 ] 00:09:40.000 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:40.000 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:40.000 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:40.259 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:40.259 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:40.259 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:40.259 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:40.259 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f51e0667-5eb6-4955-b21e-2de3a2148a0e 00:09:40.518 12:23:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 128b043d-8202-4f51-814e-39da609e88e4 00:09:40.777 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:41.036 00:09:41.036 real 0m16.887s 00:09:41.036 user 0m43.666s 00:09:41.036 sys 0m3.791s 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.036 ************************************ 00:09:41.036 END TEST lvs_grow_dirty 00:09:41.036 ************************************ 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:41.036 nvmf_trace.0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.036 rmmod nvme_tcp 00:09:41.036 rmmod nvme_fabrics 00:09:41.036 rmmod nvme_keyring 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 40201 ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 40201 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 40201 ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 40201 00:09:41.036 
12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 40201 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 40201' 00:09:41.036 killing process with pid 40201 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 40201 00:09:41.036 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 40201 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.296 12:23:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.296 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.831 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.831 00:09:43.831 real 0m41.766s 00:09:43.831 user 1m4.466s 00:09:43.831 sys 0m10.219s 00:09:43.831 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.831 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.832 ************************************ 00:09:43.832 END TEST nvmf_lvs_grow 00:09:43.832 ************************************ 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.832 ************************************ 00:09:43.832 START TEST nvmf_bdev_io_wait 00:09:43.832 ************************************ 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:43.832 * Looking for test storage... 
00:09:43.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.832 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.832 --rc genhtml_branch_coverage=1 00:09:43.832 --rc genhtml_function_coverage=1 00:09:43.832 --rc genhtml_legend=1 00:09:43.832 --rc geninfo_all_blocks=1 00:09:43.832 --rc geninfo_unexecuted_blocks=1 00:09:43.832 00:09:43.832 ' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.832 --rc genhtml_branch_coverage=1 00:09:43.832 --rc genhtml_function_coverage=1 00:09:43.832 --rc genhtml_legend=1 00:09:43.832 --rc geninfo_all_blocks=1 00:09:43.832 --rc geninfo_unexecuted_blocks=1 00:09:43.832 00:09:43.832 ' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.832 --rc genhtml_branch_coverage=1 00:09:43.832 --rc genhtml_function_coverage=1 00:09:43.832 --rc genhtml_legend=1 00:09:43.832 --rc geninfo_all_blocks=1 00:09:43.832 --rc geninfo_unexecuted_blocks=1 00:09:43.832 00:09:43.832 ' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.832 --rc genhtml_branch_coverage=1 00:09:43.832 --rc genhtml_function_coverage=1 00:09:43.832 --rc genhtml_legend=1 00:09:43.832 --rc geninfo_all_blocks=1 00:09:43.832 --rc geninfo_unexecuted_blocks=1 00:09:43.832 00:09:43.832 ' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.832 12:23:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:43.832 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.833 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.405 12:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:50.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:50.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.405 12:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:50.405 Found net devices under 0000:86:00.0: cvl_0_0 00:09:50.405 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.406 
12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:50.406 Found net devices under 0000:86:00.1: cvl_0_1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.406 12:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:09:50.406 00:09:50.406 --- 10.0.0.2 ping statistics --- 00:09:50.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.406 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:50.406 00:09:50.406 --- 10.0.0.1 ping statistics --- 00:09:50.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.406 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=44471 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 44471 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 44471 ']' 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.406 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.406 [2024-11-20 12:23:55.402796] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:50.406 [2024-11-20 12:23:55.402843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.406 [2024-11-20 12:23:55.467066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.406 [2024-11-20 12:23:55.510444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.406 [2024-11-20 12:23:55.510480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
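The nvmf_tcp_init sequence traced above builds the test topology: one port of the NIC pair (cvl_0_1) stays in the root namespace as the initiator, its sibling (cvl_0_0) is moved into a private network namespace for the SPDK target, addresses from 10.0.0.0/24 are assigned on each side, port 4420 is opened in iptables, and a ping in each direction verifies reachability. A stand-alone sketch of the same setup is below — this is not SPDK's actual code; the `run` dry-run wrapper is my addition, the interface and namespace names are taken from the log, and applying it for real needs root (set `DRY_RUN=0`).

```shell
# Sketch of the netns-based NVMe/TCP test topology from the trace above.
# Defaults to dry-run (prints the commands); set DRY_RUN=0 and run as root
# to actually apply them.
DRY_RUN=${DRY_RUN:-1}
TGT_IF=cvl_0_0          # target-side interface (moves into the namespace)
INI_IF=cvl_0_1          # initiator-side interface (stays in the root ns)
NS=cvl_0_0_ns_spdk      # namespace the SPDK target will run in

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # NIC now only visible in $NS
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                       # root ns reaches the target IP
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns reaches the initiator
```

Moving the target NIC into its own namespace is what lets initiator and target run on the same host over real hardware: each side only sees its own interface, so traffic genuinely crosses the wire instead of the loopback path.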
00:09:50.406 [2024-11-20 12:23:55.510487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.406 [2024-11-20 12:23:55.510492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.406 [2024-11-20 12:23:55.510498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.406 [2024-11-20 12:23:55.511972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.406 [2024-11-20 12:23:55.512078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.407 [2024-11-20 12:23:55.512184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.407 [2024-11-20 12:23:55.512185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 12:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 [2024-11-20 12:23:55.659782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 Malloc0 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 
12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.407 [2024-11-20 12:23:55.715045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=44504 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=44506 
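`waitforlisten 44471` in the trace blocks until the freshly launched nvmf_tgt is listening on the UNIX domain socket /var/tmp/spdk.sock before any of the rpc_cmd calls (bdev_set_options, framework_start_init, nvmf_create_transport, nvmf_create_subsystem, ...) are issued. A minimal stand-in for that gate is sketched below — the `kill -0` liveness probe and the 0.1 s poll interval are assumptions for illustration, not the helper's exact implementation:

```shell
# Minimal stand-in for waitforlisten: poll until a process's RPC Unix socket
# exists, giving up if the process dies first or max_retries is exhausted.
waitforsocket() {
    pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited before listening
        [ -S "$sock" ] && return 0               # socket is up: safe to rpc_cmd
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking process liveness on every iteration matters: without it, a target that crashes during startup would make the caller spin for the full retry budget instead of failing fast.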
00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.407 { 00:09:50.407 "params": { 00:09:50.407 "name": "Nvme$subsystem", 00:09:50.407 "trtype": "$TEST_TRANSPORT", 00:09:50.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.407 "adrfam": "ipv4", 00:09:50.407 "trsvcid": "$NVMF_PORT", 00:09:50.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.407 "hdgst": ${hdgst:-false}, 00:09:50.407 "ddgst": ${ddgst:-false} 00:09:50.407 }, 00:09:50.407 "method": "bdev_nvme_attach_controller" 00:09:50.407 } 00:09:50.407 EOF 00:09:50.407 )") 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=44508 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.407 { 00:09:50.407 "params": { 00:09:50.407 
"name": "Nvme$subsystem", 00:09:50.407 "trtype": "$TEST_TRANSPORT", 00:09:50.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.407 "adrfam": "ipv4", 00:09:50.407 "trsvcid": "$NVMF_PORT", 00:09:50.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.407 "hdgst": ${hdgst:-false}, 00:09:50.407 "ddgst": ${ddgst:-false} 00:09:50.407 }, 00:09:50.407 "method": "bdev_nvme_attach_controller" 00:09:50.407 } 00:09:50.407 EOF 00:09:50.407 )") 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=44511 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:50.407 { 00:09:50.407 "params": { 00:09:50.407 "name": "Nvme$subsystem", 00:09:50.407 "trtype": "$TEST_TRANSPORT", 00:09:50.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.407 "adrfam": "ipv4", 00:09:50.407 "trsvcid": "$NVMF_PORT", 00:09:50.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.407 "hdgst": ${hdgst:-false}, 00:09:50.407 "ddgst": ${ddgst:-false} 00:09:50.407 }, 00:09:50.407 "method": "bdev_nvme_attach_controller" 00:09:50.407 } 00:09:50.407 EOF 00:09:50.407 )") 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.407 { 00:09:50.407 "params": { 00:09:50.407 "name": "Nvme$subsystem", 00:09:50.407 "trtype": "$TEST_TRANSPORT", 00:09:50.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.407 "adrfam": "ipv4", 00:09:50.407 "trsvcid": "$NVMF_PORT", 00:09:50.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.407 "hdgst": ${hdgst:-false}, 00:09:50.407 "ddgst": ${ddgst:-false} 00:09:50.407 }, 00:09:50.407 "method": "bdev_nvme_attach_controller" 00:09:50.407 } 00:09:50.407 EOF 00:09:50.407 )") 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.407 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 44504 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.408 12:23:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.408 "params": { 00:09:50.408 "name": "Nvme1", 00:09:50.408 "trtype": "tcp", 00:09:50.408 "traddr": "10.0.0.2", 00:09:50.408 "adrfam": "ipv4", 00:09:50.408 "trsvcid": "4420", 00:09:50.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.408 "hdgst": false, 00:09:50.408 "ddgst": false 00:09:50.408 }, 00:09:50.408 "method": "bdev_nvme_attach_controller" 00:09:50.408 }' 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
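The JSON printed above is what gen_nvmf_target_json hands each bdevperf instance via `--json /dev/fd/63`: a here-doc template is expanded per subsystem with the test environment (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT), collected into the `config` array, and emitted with `printf '%s\n'` through `jq .`. A simplified POSIX reconstruction follows — the bash array and the `jq` normalisation step are dropped so the sketch stands alone; the variable values mirror the trace:

```shell
# Simplified reconstruction of gen_nvmf_target_json: expand one
# bdev_nvme_attach_controller JSON fragment per subsystem from a here-doc.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
    subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1
```

Driving bdevperf from a generated JSON config rather than CLI flags is what lets the same template describe any transport: only the environment variables change between the tcp and rdma runs.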
00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.408 "params": { 00:09:50.408 "name": "Nvme1", 00:09:50.408 "trtype": "tcp", 00:09:50.408 "traddr": "10.0.0.2", 00:09:50.408 "adrfam": "ipv4", 00:09:50.408 "trsvcid": "4420", 00:09:50.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.408 "hdgst": false, 00:09:50.408 "ddgst": false 00:09:50.408 }, 00:09:50.408 "method": "bdev_nvme_attach_controller" 00:09:50.408 }' 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.408 "params": { 00:09:50.408 "name": "Nvme1", 00:09:50.408 "trtype": "tcp", 00:09:50.408 "traddr": "10.0.0.2", 00:09:50.408 "adrfam": "ipv4", 00:09:50.408 "trsvcid": "4420", 00:09:50.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.408 "hdgst": false, 00:09:50.408 "ddgst": false 00:09:50.408 }, 00:09:50.408 "method": "bdev_nvme_attach_controller" 00:09:50.408 }' 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.408 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.408 "params": { 00:09:50.408 "name": "Nvme1", 00:09:50.408 "trtype": "tcp", 00:09:50.408 "traddr": "10.0.0.2", 00:09:50.408 "adrfam": "ipv4", 00:09:50.408 "trsvcid": "4420", 00:09:50.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.408 "hdgst": false, 00:09:50.408 "ddgst": false 00:09:50.408 }, 00:09:50.408 "method": "bdev_nvme_attach_controller" 00:09:50.408 }' 00:09:50.408 [2024-11-20 12:23:55.768718] Starting SPDK v25.01-pre git sha1 
92fb22519 / DPDK 24.03.0 initialization... 00:09:50.408 [2024-11-20 12:23:55.768768] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:50.408 [2024-11-20 12:23:55.769230] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:50.408 [2024-11-20 12:23:55.769273] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:50.408 [2024-11-20 12:23:55.769640] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:09:50.408 [2024-11-20 12:23:55.769682] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:50.408 [2024-11-20 12:23:55.770123] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:09:50.408 [2024-11-20 12:23:55.770162] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:50.408 [2024-11-20 12:23:55.962483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.408 [2024-11-20 12:23:56.003190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:50.408 [2024-11-20 12:23:56.054245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.408 [2024-11-20 12:23:56.096707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:50.408 [2024-11-20 12:23:56.151904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.667 [2024-11-20 12:23:56.204332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:50.667 [2024-11-20 12:23:56.205677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.667 [2024-11-20 12:23:56.248194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:50.667 Running I/O for 1 seconds... 00:09:50.667 Running I/O for 1 seconds... 00:09:50.667 Running I/O for 1 seconds... 00:09:50.667 Running I/O for 1 seconds... 
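At this point four bdevperf instances (write, read, flush, unmap) are running concurrently in the background on disjoint core masks, each PID recorded (WRITE_PID=44504, READ_PID=44506, FLUSH_PID=44508, UNMAP_PID=44511) so the script can later reap every one with `wait` and observe any nonzero exit. The fan-out/fan-in pattern is sketched below; the `worker` stub stands in for the real bdevperf invocation and is an assumption for illustration:

```shell
# Fan-out/fan-in pattern from bdev_io_wait.sh: one background worker per
# workload, PIDs recorded, each reaped with `wait` so a failure is not lost.
worker() {
    # stand-in for: bdevperf -m <coremask> -q 128 -o 4096 -w "$1" -t 1 ...
    sleep 0.2
}

worker write & WRITE_PID=$!
worker read  & READ_PID=$!
worker flush & FLUSH_PID=$!
worker unmap & UNMAP_PID=$!

status=0
for pid in "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"; do
    wait "$pid" || status=$?     # remember a failing workload's exit code
done
echo "workloads finished, status=$status"
```

Waiting on each PID individually (rather than a bare `wait`) is the important detail: a bare `wait` discards the children's exit statuses, whereas `wait "$pid"` returns each workload's own status so the test can fail when any one of them does.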
00:09:51.604 11737.00 IOPS, 45.85 MiB/s 00:09:51.604 Latency(us) 00:09:51.604 [2024-11-20T11:23:57.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.604 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:51.604 Nvme1n1 : 1.01 11785.34 46.04 0.00 0.00 10821.55 5898.24 15416.56 00:09:51.604 [2024-11-20T11:23:57.370Z] =================================================================================================================== 00:09:51.604 [2024-11-20T11:23:57.370Z] Total : 11785.34 46.04 0.00 0.00 10821.55 5898.24 15416.56 00:09:51.604 10775.00 IOPS, 42.09 MiB/s 00:09:51.604 Latency(us) 00:09:51.604 [2024-11-20T11:23:57.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.604 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:51.604 Nvme1n1 : 1.01 10844.90 42.36 0.00 0.00 11765.82 4525.10 20472.20 00:09:51.604 [2024-11-20T11:23:57.370Z] =================================================================================================================== 00:09:51.604 [2024-11-20T11:23:57.370Z] Total : 10844.90 42.36 0.00 0.00 11765.82 4525.10 20472.20 00:09:51.867 10093.00 IOPS, 39.43 MiB/s 00:09:51.867 Latency(us) 00:09:51.867 [2024-11-20T11:23:57.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.867 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:51.867 Nvme1n1 : 1.01 10181.26 39.77 0.00 0.00 12539.40 3760.52 23468.13 00:09:51.867 [2024-11-20T11:23:57.633Z] =================================================================================================================== 00:09:51.867 [2024-11-20T11:23:57.633Z] Total : 10181.26 39.77 0.00 0.00 12539.40 3760.52 23468.13 00:09:51.867 244384.00 IOPS, 954.62 MiB/s 00:09:51.867 Latency(us) 00:09:51.867 [2024-11-20T11:23:57.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.867 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:51.867 Nvme1n1 : 1.00 244014.51 953.18 0.00 0.00 521.48 224.30 1497.97 00:09:51.867 [2024-11-20T11:23:57.633Z] =================================================================================================================== 00:09:51.867 [2024-11-20T11:23:57.633Z] Total : 244014.51 953.18 0.00 0.00 521.48 224.30 1497.97 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 44506 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 44508 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 44511 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.867 
12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.867 rmmod nvme_tcp 00:09:51.867 rmmod nvme_fabrics 00:09:51.867 rmmod nvme_keyring 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 44471 ']' 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 44471 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 44471 ']' 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 44471 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.867 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44471 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44471' 00:09:52.129 killing process with pid 44471 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 44471 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 44471 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.129 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.667 00:09:54.667 real 0m10.752s 00:09:54.667 user 0m15.620s 00:09:54.667 sys 0m6.260s 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 ************************************ 00:09:54.667 END TEST nvmf_bdev_io_wait 00:09:54.667 
************************************ 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 ************************************ 00:09:54.667 START TEST nvmf_queue_depth 00:09:54.667 ************************************ 00:09:54.667 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:54.667 * Looking for test storage... 00:09:54.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.667 12:24:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.667 --rc genhtml_branch_coverage=1 00:09:54.667 --rc genhtml_function_coverage=1 00:09:54.667 --rc genhtml_legend=1 00:09:54.667 --rc geninfo_all_blocks=1 00:09:54.667 --rc 
geninfo_unexecuted_blocks=1 00:09:54.667 00:09:54.667 ' 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.667 --rc genhtml_branch_coverage=1 00:09:54.667 --rc genhtml_function_coverage=1 00:09:54.667 --rc genhtml_legend=1 00:09:54.667 --rc geninfo_all_blocks=1 00:09:54.667 --rc geninfo_unexecuted_blocks=1 00:09:54.667 00:09:54.667 ' 00:09:54.667 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.667 --rc genhtml_branch_coverage=1 00:09:54.667 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.668 ' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.668 --rc genhtml_branch_coverage=1 00:09:54.668 --rc genhtml_function_coverage=1 00:09:54.668 --rc genhtml_legend=1 00:09:54.668 --rc geninfo_all_blocks=1 00:09:54.668 --rc geninfo_unexecuted_blocks=1 00:09:54.668 00:09:54.668 ' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.668 12:24:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.668 12:24:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.668 12:24:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.668 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.239 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.240 12:24:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:01.240 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:01.240 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:01.240 Found net devices under 0000:86:00.0: cvl_0_0 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:01.240 Found net devices under 0000:86:00.1: cvl_0_1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.240 
12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.240 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:10:01.240 00:10:01.240 --- 10.0.0.2 ping statistics --- 00:10:01.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.240 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:01.240 00:10:01.240 --- 10.0.0.1 ping statistics --- 00:10:01.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.240 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.240 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=48842 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
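Annotation: the `nvmf_tcp_init` trace above builds a two-endpoint topology from the two e810 ports by moving the target port into its own network namespace, so target and initiator can talk over real NICs on one host. A condensed sketch (interface, namespace, and address names are taken from the log; the root-required commands are commented out, only the variables are evaluated here):

```shell
# Topology from the log: cvl_0_0 becomes the target inside a netns,
# cvl_0_1 stays in the root namespace as the initiator.
NETNS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0;  TARGET_IP=10.0.0.2
INIT_IF=cvl_0_1;    INIT_IP=10.0.0.1
# ip netns add "$NETNS"
# ip link set "$TARGET_IF" netns "$NETNS"
# ip addr add "$INIT_IP/24" dev "$INIT_IF"
# ip netns exec "$NETNS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
# ip link set "$INIT_IF" up
# ip netns exec "$NETNS" ip link set "$TARGET_IF" up
# ip netns exec "$NETNS" ip link set lo up
# ping -c 1 "$TARGET_IP"                        # initiator -> target
# ip netns exec "$NETNS" ping -c 1 "$INIT_IP"   # target -> initiator
```

The two pings in the log (0.239 ms and 0.121 ms) are exactly these reachability checks before the target application is started.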
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 48842 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 48842 ']' 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 [2024-11-20 12:24:06.188478] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:01.241 [2024-11-20 12:24:06.188522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.241 [2024-11-20 12:24:06.269533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.241 [2024-11-20 12:24:06.310187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.241 [2024-11-20 12:24:06.310227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:01.241 [2024-11-20 12:24:06.310234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.241 [2024-11-20 12:24:06.310240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.241 [2024-11-20 12:24:06.310245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.241 [2024-11-20 12:24:06.310824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 [2024-11-20 12:24:06.449593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 Malloc0 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 [2024-11-20 12:24:06.491679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.241 12:24:06 
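Annotation: the `rpc_cmd` calls traced above configure the target end to end. Condensed to the underlying `rpc.py` invocations (arguments are copied from the log; assuming `rpc.py` in the SPDK checkout's `scripts/` directory stands in for the test harness's `rpc_cmd` wrapper, and the calls are commented because they need the `nvmf_tgt` started above):

```shell
# Target-side setup sequence as traced in queue_depth.sh@23-27:
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
# $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
# $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
# $RPC nvmf_subsystem_add_ns "$NQN" Malloc0           # expose the bdev as a namespace
# $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The final listener call produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the log.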
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=48927 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 48927 /var/tmp/bdevperf.sock 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 48927 ']' 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 [2024-11-20 12:24:06.539899] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:10:01.241 [2024-11-20 12:24:06.539941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48927 ] 00:10:01.241 [2024-11-20 12:24:06.613875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.241 [2024-11-20 12:24:06.656302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 NVMe0n1 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.241 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:01.502 Running I/O for 10 seconds... 
00:10:03.376 12262.00 IOPS, 47.90 MiB/s [2024-11-20T11:24:10.077Z] 12281.00 IOPS, 47.97 MiB/s [2024-11-20T11:24:11.455Z] 12286.00 IOPS, 47.99 MiB/s [2024-11-20T11:24:12.393Z] 12302.00 IOPS, 48.05 MiB/s [2024-11-20T11:24:13.331Z] 12408.00 IOPS, 48.47 MiB/s [2024-11-20T11:24:14.269Z] 12439.33 IOPS, 48.59 MiB/s [2024-11-20T11:24:15.206Z] 12434.00 IOPS, 48.57 MiB/s [2024-11-20T11:24:16.143Z] 12499.12 IOPS, 48.82 MiB/s [2024-11-20T11:24:17.079Z] 12502.22 IOPS, 48.84 MiB/s [2024-11-20T11:24:17.337Z] 12527.30 IOPS, 48.93 MiB/s 00:10:11.572 Latency(us) 00:10:11.572 [2024-11-20T11:24:17.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.572 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:11.572 Verification LBA range: start 0x0 length 0x4000 00:10:11.572 NVMe0n1 : 10.06 12554.55 49.04 0.00 0.00 81271.07 17850.76 53677.10 00:10:11.572 [2024-11-20T11:24:17.338Z] =================================================================================================================== 00:10:11.572 [2024-11-20T11:24:17.338Z] Total : 12554.55 49.04 0.00 0.00 81271.07 17850.76 53677.10 00:10:11.572 { 00:10:11.572 "results": [ 00:10:11.572 { 00:10:11.572 "job": "NVMe0n1", 00:10:11.572 "core_mask": "0x1", 00:10:11.572 "workload": "verify", 00:10:11.572 "status": "finished", 00:10:11.572 "verify_range": { 00:10:11.572 "start": 0, 00:10:11.572 "length": 16384 00:10:11.572 }, 00:10:11.572 "queue_depth": 1024, 00:10:11.572 "io_size": 4096, 00:10:11.572 "runtime": 10.057471, 00:10:11.572 "iops": 12554.547758576684, 00:10:11.572 "mibps": 49.04120218194017, 00:10:11.572 "io_failed": 0, 00:10:11.572 "io_timeout": 0, 00:10:11.572 "avg_latency_us": 81271.07049911997, 00:10:11.572 "min_latency_us": 17850.758095238096, 00:10:11.572 "max_latency_us": 53677.10476190476 00:10:11.572 } 00:10:11.572 ], 00:10:11.572 "core_count": 1 00:10:11.572 } 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 48927 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 48927 ']' 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 48927 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48927 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48927' 00:10:11.572 killing process with pid 48927 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 48927 00:10:11.572 Received shutdown signal, test time was about 10.000000 seconds 00:10:11.572 00:10:11.572 Latency(us) 00:10:11.572 [2024-11-20T11:24:17.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.572 [2024-11-20T11:24:17.338Z] =================================================================================================================== 00:10:11.572 [2024-11-20T11:24:17.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:11.572 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 48927 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.831 rmmod nvme_tcp 00:10:11.831 rmmod nvme_fabrics 00:10:11.831 rmmod nvme_keyring 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 48842 ']' 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 48842 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 48842 ']' 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 48842 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48842 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48842' 00:10:11.831 killing process with pid 48842 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 48842 00:10:11.831 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 48842 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.091 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.001 12:24:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.001 00:10:14.001 real 0m19.801s 00:10:14.001 user 0m23.167s 00:10:14.001 sys 0m6.084s 00:10:14.001 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.001 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.001 ************************************ 00:10:14.001 END TEST nvmf_queue_depth 00:10:14.001 ************************************ 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.260 ************************************ 00:10:14.260 START TEST nvmf_target_multipath 00:10:14.260 ************************************ 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:14.260 * Looking for test storage... 
00:10:14.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:14.260 12:24:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.260 --rc geninfo_all_blocks=1 00:10:14.260 --rc geninfo_unexecuted_blocks=1 00:10:14.260 00:10:14.260 ' 00:10:14.260 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.261 --rc geninfo_all_blocks=1 00:10:14.261 --rc geninfo_unexecuted_blocks=1 00:10:14.261 00:10:14.261 ' 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.261 --rc genhtml_branch_coverage=1 00:10:14.261 --rc genhtml_function_coverage=1 00:10:14.261 --rc genhtml_legend=1 00:10:14.261 --rc geninfo_all_blocks=1 00:10:14.261 --rc geninfo_unexecuted_blocks=1 00:10:14.261 00:10:14.261 ' 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.261 --rc genhtml_branch_coverage=1 00:10:14.261 --rc genhtml_function_coverage=1 00:10:14.261 --rc genhtml_legend=1 00:10:14.261 --rc geninfo_all_blocks=1 00:10:14.261 --rc geninfo_unexecuted_blocks=1 00:10:14.261 00:10:14.261 ' 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.261 12:24:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.261 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.520 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.520 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.520 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.520 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.094 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:21.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:21.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:21.095 Found net devices under 0000:86:00.0: cvl_0_0 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.095 12:24:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:21.095 Found net devices under 0000:86:00.1: cvl_0_1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.095 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:10:21.095 00:10:21.095 --- 10.0.0.2 ping statistics --- 00:10:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.095 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
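The trace above installs its firewall rule through `ipts`, which appends an `-m comment --comment 'SPDK_NVMF:…'` tag; teardown later removes only those tagged rules by filtering the saved ruleset (visible further down as `iptables-save`, `grep -v SPDK_NVMF`, `iptables-restore`). A minimal sketch of that tag-and-filter pattern, simulated on a saved-rules string so it runs without root or iptables (the rule text here is illustrative, not the exact ruleset from this run):

```shell
# Simulated output of `iptables-save`: one rule carries the SPDK_NVMF tag.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

# In the real script this is: iptables-save | grep -v SPDK_NVMF | iptables-restore
# Dropping tagged lines removes only SPDK-added rules, leaving the rest intact.
cleaned=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging rules at insertion time is what makes the cleanup idempotent: the teardown path does not need to remember which interfaces or ports were used, it only strips lines carrying the marker.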
00:10:21.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:10:21.095 00:10:21.095 --- 10.0.0.1 ping statistics --- 00:10:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.095 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:21.095 only one NIC for nvmf test 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:21.095 12:24:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.095 rmmod nvme_tcp 00:10:21.095 rmmod nvme_fabrics 00:10:21.095 rmmod nvme_keyring 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.095 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.096 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.096 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.477 00:10:22.477 real 0m8.418s 00:10:22.477 user 0m1.912s 00:10:22.477 sys 0m4.513s 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.477 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.477 ************************************ 00:10:22.477 END TEST nvmf_target_multipath 00:10:22.477 ************************************ 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 ************************************ 00:10:22.737 START TEST nvmf_zcopy 00:10:22.737 ************************************ 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:22.737 * Looking for test storage... 00:10:22.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.737 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.737 --rc genhtml_branch_coverage=1 00:10:22.737 --rc genhtml_function_coverage=1 00:10:22.737 --rc genhtml_legend=1 00:10:22.737 --rc geninfo_all_blocks=1 00:10:22.737 --rc geninfo_unexecuted_blocks=1 00:10:22.737 00:10:22.737 ' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.737 --rc genhtml_branch_coverage=1 00:10:22.737 --rc genhtml_function_coverage=1 00:10:22.737 --rc genhtml_legend=1 00:10:22.737 --rc geninfo_all_blocks=1 00:10:22.737 --rc geninfo_unexecuted_blocks=1 00:10:22.737 00:10:22.737 ' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.737 --rc genhtml_branch_coverage=1 00:10:22.737 --rc genhtml_function_coverage=1 00:10:22.737 --rc genhtml_legend=1 00:10:22.737 --rc geninfo_all_blocks=1 00:10:22.737 --rc geninfo_unexecuted_blocks=1 00:10:22.737 00:10:22.737 ' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.737 --rc genhtml_branch_coverage=1 00:10:22.737 --rc 
genhtml_function_coverage=1 00:10:22.737 --rc genhtml_legend=1 00:10:22.737 --rc geninfo_all_blocks=1 00:10:22.737 --rc geninfo_unexecuted_blocks=1 00:10:22.737 00:10:22.737 ' 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:22.737 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.738 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.738 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.997 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.998 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.998 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.570 12:24:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.570 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:29.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:29.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:29.571 Found net devices under 0000:86:00.0: cvl_0_0 00:10:29.571 12:24:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:29.571 Found net devices under 0000:86:00.1: cvl_0_1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.571 12:24:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:10:29.571 00:10:29.571 --- 10.0.0.2 ping statistics --- 00:10:29.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.571 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:29.571 00:10:29.571 --- 10.0.0.1 ping statistics --- 00:10:29.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.571 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=57954 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 57954 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 57954 ']' 00:10:29.571 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 [2024-11-20 12:24:34.570011] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:29.572 [2024-11-20 12:24:34.570057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.572 [2024-11-20 12:24:34.649600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.572 [2024-11-20 12:24:34.689908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.572 [2024-11-20 12:24:34.689941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:29.572 [2024-11-20 12:24:34.689948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.572 [2024-11-20 12:24:34.689954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.572 [2024-11-20 12:24:34.689958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.572 [2024-11-20 12:24:34.690539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 [2024-11-20 12:24:34.829085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 [2024-11-20 12:24:34.849263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 malloc0 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.572 { 00:10:29.572 "params": { 00:10:29.572 "name": "Nvme$subsystem", 00:10:29.572 "trtype": "$TEST_TRANSPORT", 00:10:29.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.572 "adrfam": "ipv4", 00:10:29.572 "trsvcid": "$NVMF_PORT", 00:10:29.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.572 "hdgst": ${hdgst:-false}, 00:10:29.572 "ddgst": ${ddgst:-false} 00:10:29.572 }, 00:10:29.572 "method": "bdev_nvme_attach_controller" 00:10:29.572 } 00:10:29.572 EOF 00:10:29.572 )") 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.572 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.572 "params": { 00:10:29.572 "name": "Nvme1", 00:10:29.572 "trtype": "tcp", 00:10:29.572 "traddr": "10.0.0.2", 00:10:29.572 "adrfam": "ipv4", 00:10:29.572 "trsvcid": "4420", 00:10:29.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.572 "hdgst": false, 00:10:29.572 "ddgst": false 00:10:29.572 }, 00:10:29.572 "method": "bdev_nvme_attach_controller" 00:10:29.572 }' 00:10:29.572 [2024-11-20 12:24:34.927652] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:10:29.572 [2024-11-20 12:24:34.927695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:10:29.572 [2024-11-20 12:24:35.001776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.572 [2024-11-20 12:24:35.042618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.572 Running I/O for 10 seconds... 
00:10:31.442 8559.00 IOPS, 66.87 MiB/s [2024-11-20T11:24:38.583Z] 8613.50 IOPS, 67.29 MiB/s [2024-11-20T11:24:39.518Z] 8655.33 IOPS, 67.62 MiB/s [2024-11-20T11:24:40.453Z] 8660.75 IOPS, 67.66 MiB/s [2024-11-20T11:24:41.395Z] 8673.80 IOPS, 67.76 MiB/s [2024-11-20T11:24:42.332Z] 8641.67 IOPS, 67.51 MiB/s [2024-11-20T11:24:43.268Z] 8655.14 IOPS, 67.62 MiB/s [2024-11-20T11:24:44.256Z] 8673.12 IOPS, 67.76 MiB/s [2024-11-20T11:24:45.230Z] 8681.33 IOPS, 67.82 MiB/s [2024-11-20T11:24:45.489Z] 8692.70 IOPS, 67.91 MiB/s 00:10:39.723 Latency(us) 00:10:39.723 [2024-11-20T11:24:45.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.723 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:39.723 Verification LBA range: start 0x0 length 0x1000 00:10:39.723 Nvme1n1 : 10.01 8695.13 67.93 0.00 0.00 14679.75 2543.42 22344.66 00:10:39.723 [2024-11-20T11:24:45.489Z] =================================================================================================================== 00:10:39.723 [2024-11-20T11:24:45.489Z] Total : 8695.13 67.93 0.00 0.00 14679.75 2543.42 22344.66 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=59813 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.724 12:24:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.724 { 00:10:39.724 "params": { 00:10:39.724 "name": "Nvme$subsystem", 00:10:39.724 "trtype": "$TEST_TRANSPORT", 00:10:39.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.724 "adrfam": "ipv4", 00:10:39.724 "trsvcid": "$NVMF_PORT", 00:10:39.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.724 "hdgst": ${hdgst:-false}, 00:10:39.724 "ddgst": ${ddgst:-false} 00:10:39.724 }, 00:10:39.724 "method": "bdev_nvme_attach_controller" 00:10:39.724 } 00:10:39.724 EOF 00:10:39.724 )") 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:39.724 [2024-11-20 12:24:45.398295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.398329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:39.724 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.724 "params": { 00:10:39.724 "name": "Nvme1", 00:10:39.724 "trtype": "tcp", 00:10:39.724 "traddr": "10.0.0.2", 00:10:39.724 "adrfam": "ipv4", 00:10:39.724 "trsvcid": "4420", 00:10:39.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.724 "hdgst": false, 00:10:39.724 "ddgst": false 00:10:39.724 }, 00:10:39.724 "method": "bdev_nvme_attach_controller" 00:10:39.724 }' 00:10:39.724 [2024-11-20 12:24:45.410304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.410317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.422328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.422337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.434360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.434369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.439395] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:10:39.724 [2024-11-20 12:24:45.439434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:10:39.724 [2024-11-20 12:24:45.446392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.446402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.458424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.458433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.470457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.470466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.724 [2024-11-20 12:24:45.482488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.724 [2024-11-20 12:24:45.482497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.494523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.494532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.506554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.506563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.513652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.983 [2024-11-20 12:24:45.518583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:39.983 [2024-11-20 12:24:45.518592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.530619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.530633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.542645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.542654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.554678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.554690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.555684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.983 [2024-11-20 12:24:45.566718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.566733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.578748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.578764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.983 [2024-11-20 12:24:45.590779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.983 [2024-11-20 12:24:45.590791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.984 [2024-11-20 12:24:45.602817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.984 [2024-11-20 12:24:45.602834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.984 [2024-11-20 12:24:45.614844] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.984 [2024-11-20 12:24:45.614856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.984 Running I/O for 5 seconds...
[2024-11-20 12:24:46.773793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.281 16744.00 IOPS, 130.81 MiB/s [2024-11-20T11:24:47.047Z] [2024-11-20 12:24:46.882304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.281
[2024-11-20 12:24:46.882322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.281 [2024-11-20 12:24:47.780687]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.059 [2024-11-20 12:24:47.780707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.059 [2024-11-20 12:24:47.794787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.059 [2024-11-20 12:24:47.794807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.059 [2024-11-20 12:24:47.808815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.059 [2024-11-20 12:24:47.808835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.059 [2024-11-20 12:24:47.819440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.059 [2024-11-20 12:24:47.819458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.833414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.833433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.847399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.847419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.861786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.861805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.872272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.872290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 16803.00 IOPS, 131.27 MiB/s [2024-11-20T11:24:48.084Z] [2024-11-20 12:24:47.886527] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.886546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.900583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.900602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.914756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.914775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.928667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.928685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.942408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.942428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.956260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.956280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.970280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.970299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.984123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.984142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:47.998055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:47.998075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:48.012037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:48.012056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:48.025722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:48.025740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:48.039760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:48.039777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:48.054051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:48.054069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.318 [2024-11-20 12:24:48.068471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.318 [2024-11-20 12:24:48.068488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.083509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.083528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.097629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.097648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.111494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 
[2024-11-20 12:24:48.111512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.125151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.125169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.139417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.139434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.153197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.153222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.166627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.166645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.180612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.180630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.194389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.194407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.207901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.207919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.221737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.221755] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.235773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.235791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.249790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.249808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.263353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.263372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.277610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.277627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.291573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.291590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.305990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.306009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.319592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.319610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.577 [2024-11-20 12:24:48.334093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.577 [2024-11-20 12:24:48.334112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:42.837 [2024-11-20 12:24:48.344585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.344607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.359128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.359146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.372319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.372338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.386618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.386636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.400208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.400226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.414339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.414358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.425784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.425802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.439983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.440001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.453678] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.453696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.467846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.467870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.478612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.478630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.493070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.493088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.506713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.506731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.520661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.520679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.534309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.534326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.547807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.547825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.561943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.561961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.575546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.575564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.837 [2024-11-20 12:24:48.589252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.837 [2024-11-20 12:24:48.589269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.603222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.603244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.617179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.617197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.631262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.631280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.645080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.645098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.658932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.658951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.672822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 
[2024-11-20 12:24:48.672840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.686796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.686813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.700906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.700924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.715046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.715065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.728303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.728322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.742362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.742381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.756143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.756162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.769862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.769880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.783384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.783401] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.797117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.797135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.810938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.810957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.825048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.825067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.838762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.838780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.095 [2024-11-20 12:24:48.852689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.095 [2024-11-20 12:24:48.852708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.866160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.866183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.880437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.880455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 16833.33 IOPS, 131.51 MiB/s [2024-11-20T11:24:49.120Z] [2024-11-20 12:24:48.894559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.894577] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.908599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.908617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.920374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.920392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.934336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.934354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.948042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.948061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.962293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.962312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.975773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.975791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:48.989596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:48.989614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.003335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.003353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:43.354 [2024-11-20 12:24:49.017245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.017264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.031128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.031148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.045360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.045381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.056032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.056051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.070926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.070946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.084702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.084720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.098479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.098497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.354 [2024-11-20 12:24:49.111988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.354 [2024-11-20 12:24:49.112006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.125939] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.125958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.139863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.139880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.153713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.153731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.166992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.167010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.180742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.180760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.194287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.194305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.617 [2024-11-20 12:24:49.208132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.617 [2024-11-20 12:24:49.208151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.221571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.221590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.235567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.235586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.249866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.249885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.263919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.263938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.277966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.277985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.291589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.291607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.305321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.305340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.319307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.319326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.332726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.332745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.346556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 
[2024-11-20 12:24:49.346575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.360154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.360173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.618 [2024-11-20 12:24:49.374098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.618 [2024-11-20 12:24:49.374117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.878 [2024-11-20 12:24:49.387760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.878 [2024-11-20 12:24:49.387779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.878 [2024-11-20 12:24:49.401152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.878 [2024-11-20 12:24:49.401170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.878 [2024-11-20 12:24:49.414821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.878 [2024-11-20 12:24:49.414839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.878 [2024-11-20 12:24:49.429026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.878 [2024-11-20 12:24:49.429044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.878 [2024-11-20 12:24:49.442278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.879 [2024-11-20 12:24:49.442296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:43.879 [2024-11-20 12:24:49.455998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:43.879 [2024-11-20 12:24:49.456016] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:43.879 [2024-11-20 12:24:49.469906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:43.879 [2024-11-20 12:24:49.469924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:44.138 16873.75 IOPS, 131.83 MiB/s [2024-11-20T11:24:49.904Z]
00:10:45.174 16850.20 IOPS, 131.64 MiB/s [2024-11-20T11:24:50.940Z]
00:10:45.174 00:10:45.174 Latency(us)
00:10:45.174 [2024-11-20T11:24:50.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:45.174 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:45.174 Nvme1n1 : 5.01 16852.70 131.66 0.00 0.00 7588.26 3510.86 16976.94
00:10:45.174 [2024-11-20T11:24:50.940Z] ===================================================================================================================
00:10:45.174 [2024-11-20T11:24:50.940Z] Total : 16852.70 131.66 0.00 0.00 7588.26 3510.86 16976.94
00:10:45.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (59813) - No such process
00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 59813
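The run above records a long stream of identical add_ns failures, which the test expects while the subsystem is paused. When triaging a saved copy of such a console log, the duplicate-NSID errors can be tallied with a short shell sketch; the temp-file setup and sample lines below are illustrative stand-ins, not files from the actual workspace:

```shell
#!/bin/sh
# Write a tiny sample log modeled on the failure lines seen in this run.
log=$(mktemp)
cat > "$log" <<'EOF'
[2024-11-20 12:24:49.469906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-20 12:24:49.469924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-20 12:24:49.483287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-20 12:24:49.483306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
EOF
# Count how many add_ns attempts hit the duplicate-NSID error.
grep -c 'Requested NSID 1 already in use' "$log"
rm -f "$log"
```

Against a real build log the same `grep -c` gives a quick sense of how long the retry loop ran before `wait` observed the workload exit.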
00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.433 delay0 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.433 12:24:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:45.433 [2024-11-20 12:24:51.194889] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:51.997 Initializing NVMe Controllers 00:10:51.997 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.997 Initialization complete. Launching workers. 00:10:51.997 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 113 00:10:51.997 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 400, failed to submit 33 00:10:51.997 success 225, unsuccessful 175, failed 0 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.997 rmmod nvme_tcp 00:10:51.997 rmmod nvme_fabrics 00:10:51.997 rmmod nvme_keyring 00:10:51.997 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 57954 ']' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 57954 ']' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57954' 00:10:51.998 killing process with pid 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 57954 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.998 12:24:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.903 00:10:53.903 real 0m31.308s 00:10:53.903 user 0m41.760s 00:10:53.903 sys 0m11.081s 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:53.903 ************************************ 00:10:53.903 END TEST nvmf_zcopy 00:10:53.903 ************************************ 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.903 12:24:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.163 ************************************ 00:10:54.163 START TEST nvmf_nmic 00:10:54.163 ************************************ 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:54.163 * Looking for test storage... 
00:10:54.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.163 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.164 12:24:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.164 --rc genhtml_branch_coverage=1 00:10:54.164 --rc genhtml_function_coverage=1 00:10:54.164 --rc genhtml_legend=1 00:10:54.164 --rc geninfo_all_blocks=1 00:10:54.164 --rc geninfo_unexecuted_blocks=1 
00:10:54.164 00:10:54.164 ' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.164 --rc genhtml_branch_coverage=1 00:10:54.164 --rc genhtml_function_coverage=1 00:10:54.164 --rc genhtml_legend=1 00:10:54.164 --rc geninfo_all_blocks=1 00:10:54.164 --rc geninfo_unexecuted_blocks=1 00:10:54.164 00:10:54.164 ' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.164 --rc genhtml_branch_coverage=1 00:10:54.164 --rc genhtml_function_coverage=1 00:10:54.164 --rc genhtml_legend=1 00:10:54.164 --rc geninfo_all_blocks=1 00:10:54.164 --rc geninfo_unexecuted_blocks=1 00:10:54.164 00:10:54.164 ' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.164 --rc genhtml_branch_coverage=1 00:10:54.164 --rc genhtml_function_coverage=1 00:10:54.164 --rc genhtml_legend=1 00:10:54.164 --rc geninfo_all_blocks=1 00:10:54.164 --rc geninfo_unexecuted_blocks=1 00:10:54.164 00:10:54.164 ' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.164 12:24:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.164 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.165 
12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.165 12:24:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.735 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.736 12:25:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:00.736 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:00.736 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:00.736 Found net devices under 0000:86:00.0: cvl_0_0 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:00.736 Found net devices under 0000:86:00.1: cvl_0_1 00:11:00.736 
12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:11:00.736 00:11:00.736 --- 10.0.0.2 ping statistics --- 00:11:00.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.736 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:11:00.736 00:11:00.736 --- 10.0.0.1 ping statistics --- 00:11:00.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.736 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.736 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65348 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65348 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65348 ']' 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.737 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.737 [2024-11-20 12:25:05.974179] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:00.737 [2024-11-20 12:25:05.974227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.737 [2024-11-20 12:25:06.056395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.737 [2024-11-20 12:25:06.100124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.737 [2024-11-20 12:25:06.100162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.737 [2024-11-20 12:25:06.100169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.737 [2024-11-20 12:25:06.100176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:00.737 [2024-11-20 12:25:06.100180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.737 [2024-11-20 12:25:06.101733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.737 [2024-11-20 12:25:06.101845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.737 [2024-11-20 12:25:06.101846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.737 [2024-11-20 12:25:06.101751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 [2024-11-20 12:25:06.852380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.306 12:25:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 Malloc0 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 [2024-11-20 12:25:06.920957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:01.306 test case1: single bdev can't be used in multiple subsystems 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 [2024-11-20 12:25:06.948846] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:01.306 [2024-11-20 12:25:06.948864] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:01.306 [2024-11-20 12:25:06.948872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:01.306 request: 00:11:01.306 { 00:11:01.306 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:01.306 "namespace": { 00:11:01.306 "bdev_name": "Malloc0", 00:11:01.306 "no_auto_visible": false 00:11:01.306 }, 00:11:01.306 "method": "nvmf_subsystem_add_ns", 00:11:01.306 "req_id": 1 00:11:01.306 } 00:11:01.306 Got JSON-RPC error response 00:11:01.306 response: 00:11:01.306 { 00:11:01.306 "code": -32602, 00:11:01.306 "message": "Invalid parameters" 00:11:01.306 } 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:01.306 Adding namespace failed - expected result. 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:01.306 test case2: host connect to nvmf target in multiple paths 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:01.306 [2024-11-20 12:25:06.960980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.306 12:25:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.685 12:25:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:03.620 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.620 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.620 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.620 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.620 12:25:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:05.523 12:25:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:05.523 [global] 00:11:05.523 thread=1 
00:11:05.523 invalidate=1 00:11:05.523 rw=write 00:11:05.523 time_based=1 00:11:05.523 runtime=1 00:11:05.523 ioengine=libaio 00:11:05.523 direct=1 00:11:05.523 bs=4096 00:11:05.523 iodepth=1 00:11:05.523 norandommap=0 00:11:05.523 numjobs=1 00:11:05.523 00:11:05.523 verify_dump=1 00:11:05.523 verify_backlog=512 00:11:05.523 verify_state_save=0 00:11:05.523 do_verify=1 00:11:05.523 verify=crc32c-intel 00:11:05.523 [job0] 00:11:05.523 filename=/dev/nvme0n1 00:11:05.523 Could not set queue depth (nvme0n1) 00:11:05.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.781 fio-3.35 00:11:05.781 Starting 1 thread 00:11:07.159 00:11:07.159 job0: (groupid=0, jobs=1): err= 0: pid=66390: Wed Nov 20 12:25:12 2024 00:11:07.159 read: IOPS=522, BW=2091KiB/s (2141kB/s)(2116KiB/1012msec) 00:11:07.159 slat (nsec): min=6386, max=27113, avg=7736.91, stdev=3100.17 00:11:07.159 clat (usec): min=185, max=42086, avg=1575.27, stdev=7293.33 00:11:07.159 lat (usec): min=192, max=42110, avg=1583.00, stdev=7296.05 00:11:07.159 clat percentiles (usec): 00:11:07.159 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:11:07.159 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 249], 60.00th=[ 262], 00:11:07.159 | 70.00th=[ 269], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:11:07.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.159 | 99.99th=[42206] 00:11:07.159 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:11:07.159 slat (usec): min=9, max=24592, avg=34.43, stdev=768.18 00:11:07.159 clat (usec): min=110, max=308, avg=132.18, stdev=14.09 00:11:07.159 lat (usec): min=120, max=24848, avg=166.61, stdev=772.20 00:11:07.159 clat percentiles (usec): 00:11:07.159 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 125], 00:11:07.159 | 30.00th=[ 127], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:11:07.159 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 
143], 95.00th=[ 159], 00:11:07.159 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 281], 99.95th=[ 310], 00:11:07.159 | 99.99th=[ 310] 00:11:07.159 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.159 lat (usec) : 250=82.87%, 500=16.03% 00:11:07.159 lat (msec) : 50=1.09% 00:11:07.159 cpu : usr=0.20%, sys=1.98%, ctx=1556, majf=0, minf=1 00:11:07.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.159 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.159 00:11:07.159 Run status group 0 (all jobs): 00:11:07.159 READ: bw=2091KiB/s (2141kB/s), 2091KiB/s-2091KiB/s (2141kB/s-2141kB/s), io=2116KiB (2167kB), run=1012-1012msec 00:11:07.159 WRITE: bw=4047KiB/s (4145kB/s), 4047KiB/s-4047KiB/s (4145kB/s-4145kB/s), io=4096KiB (4194kB), run=1012-1012msec 00:11:07.159 00:11:07.159 Disk stats (read/write): 00:11:07.159 nvme0n1: ios=551/1024, merge=0/0, ticks=1665/130, in_queue=1795, util=98.50% 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.159 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.159 rmmod nvme_tcp 00:11:07.160 rmmod nvme_fabrics 00:11:07.160 rmmod nvme_keyring 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65348 ']' 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65348 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65348 ']' 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65348 00:11:07.160 12:25:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65348 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65348' 00:11:07.160 killing process with pid 65348 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65348 00:11:07.160 12:25:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65348 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.419 12:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.955 00:11:09.955 real 0m15.492s 00:11:09.955 user 0m35.291s 00:11:09.955 sys 0m5.322s 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:09.955 ************************************ 00:11:09.955 END TEST nvmf_nmic 00:11:09.955 ************************************ 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:09.955 ************************************ 00:11:09.955 START TEST nvmf_fio_target 00:11:09.955 ************************************ 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:09.955 * Looking for test storage... 
00:11:09.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:09.955 12:25:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.955 
--rc genhtml_branch_coverage=1 00:11:09.955 --rc genhtml_function_coverage=1 00:11:09.955 --rc genhtml_legend=1 00:11:09.955 --rc geninfo_all_blocks=1 00:11:09.955 --rc geninfo_unexecuted_blocks=1 00:11:09.955 00:11:09.955 ' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.955 --rc genhtml_branch_coverage=1 00:11:09.955 --rc genhtml_function_coverage=1 00:11:09.955 --rc genhtml_legend=1 00:11:09.955 --rc geninfo_all_blocks=1 00:11:09.955 --rc geninfo_unexecuted_blocks=1 00:11:09.955 00:11:09.955 ' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.955 --rc genhtml_branch_coverage=1 00:11:09.955 --rc genhtml_function_coverage=1 00:11:09.955 --rc genhtml_legend=1 00:11:09.955 --rc geninfo_all_blocks=1 00:11:09.955 --rc geninfo_unexecuted_blocks=1 00:11:09.955 00:11:09.955 ' 00:11:09.955 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.955 --rc genhtml_branch_coverage=1 00:11:09.955 --rc genhtml_function_coverage=1 00:11:09.955 --rc genhtml_legend=1 00:11:09.955 --rc geninfo_all_blocks=1 00:11:09.956 --rc geninfo_unexecuted_blocks=1 00:11:09.956 00:11:09.956 ' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.956 
12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.956 12:25:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.956 12:25:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.956 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.525 12:25:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.525 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:16.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:16.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.526 
12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:16.526 Found net devices under 0000:86:00.0: cvl_0_0 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.526 12:25:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:16.526 Found net devices under 0000:86:00.1: cvl_0_1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:11:16.526 00:11:16.526 --- 10.0.0.2 ping statistics --- 00:11:16.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.526 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:16.526 00:11:16.526 --- 10.0.0.1 ping statistics --- 00:11:16.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.526 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70245 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70245 00:11:16.526 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70245 ']' 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 [2024-11-20 12:25:21.504151] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:16.527 [2024-11-20 12:25:21.504198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.527 [2024-11-20 12:25:21.581713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.527 [2024-11-20 12:25:21.622230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.527 [2024-11-20 12:25:21.622266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.527 [2024-11-20 12:25:21.622273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.527 [2024-11-20 12:25:21.622279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.527 [2024-11-20 12:25:21.622284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:16.527 [2024-11-20 12:25:21.623887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.527 [2024-11-20 12:25:21.623994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.527 [2024-11-20 12:25:21.624080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.527 [2024-11-20 12:25:21.624081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:16.527 [2024-11-20 12:25:21.928363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.527 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.527 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:16.527 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:16.786 12:25:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:16.786 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.045 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:17.045 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.304 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:17.304 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:17.304 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.563 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:17.563 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:17.822 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:17.822 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.081 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:18.081 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:18.341 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.341 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:18.341 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.600 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:18.600 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.859 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.117 [2024-11-20 12:25:24.625538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.117 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:19.117 12:25:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:19.376 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:20.755 12:25:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:22.658 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:22.658 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:22.659 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.659 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:22.659 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.659 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:22.659 12:25:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:22.659 [global] 00:11:22.659 thread=1 00:11:22.659 invalidate=1 00:11:22.659 rw=write 00:11:22.659 time_based=1 00:11:22.659 runtime=1 00:11:22.659 ioengine=libaio 00:11:22.659 direct=1 00:11:22.659 bs=4096 00:11:22.659 iodepth=1 00:11:22.659 norandommap=0 00:11:22.659 numjobs=1 00:11:22.659 00:11:22.659 
verify_dump=1 00:11:22.659 verify_backlog=512 00:11:22.659 verify_state_save=0 00:11:22.659 do_verify=1 00:11:22.659 verify=crc32c-intel 00:11:22.659 [job0] 00:11:22.659 filename=/dev/nvme0n1 00:11:22.659 [job1] 00:11:22.659 filename=/dev/nvme0n2 00:11:22.659 [job2] 00:11:22.659 filename=/dev/nvme0n3 00:11:22.659 [job3] 00:11:22.659 filename=/dev/nvme0n4 00:11:22.659 Could not set queue depth (nvme0n1) 00:11:22.659 Could not set queue depth (nvme0n2) 00:11:22.659 Could not set queue depth (nvme0n3) 00:11:22.659 Could not set queue depth (nvme0n4) 00:11:22.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.916 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.916 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.916 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:22.916 fio-3.35 00:11:22.916 Starting 4 threads 00:11:24.294 00:11:24.294 job0: (groupid=0, jobs=1): err= 0: pid=71612: Wed Nov 20 12:25:29 2024 00:11:24.294 read: IOPS=2052, BW=8212KiB/s (8409kB/s)(8220KiB/1001msec) 00:11:24.294 slat (nsec): min=7039, max=35041, avg=8219.37, stdev=1452.70 00:11:24.294 clat (usec): min=192, max=535, avg=252.35, stdev=40.00 00:11:24.294 lat (usec): min=200, max=543, avg=260.57, stdev=40.16 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:11:24.294 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:11:24.294 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 306], 00:11:24.294 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 523], 00:11:24.294 | 99.99th=[ 537] 00:11:24.294 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:24.294 slat (nsec): min=10197, max=43141, avg=11501.37, stdev=1658.17 
00:11:24.294 clat (usec): min=111, max=375, avg=164.75, stdev=35.74 00:11:24.294 lat (usec): min=122, max=385, avg=176.25, stdev=35.88 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 137], 00:11:24.294 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 159], 00:11:24.294 | 70.00th=[ 176], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 233], 00:11:24.294 | 99.00th=[ 253], 99.50th=[ 289], 99.90th=[ 322], 99.95th=[ 322], 00:11:24.294 | 99.99th=[ 375] 00:11:24.294 bw ( KiB/s): min= 9024, max= 9024, per=38.53%, avg=9024.00, stdev= 0.00, samples=1 00:11:24.294 iops : min= 2256, max= 2256, avg=2256.00, stdev= 0.00, samples=1 00:11:24.294 lat (usec) : 250=84.59%, 500=15.28%, 750=0.13% 00:11:24.294 cpu : usr=4.30%, sys=6.90%, ctx=4615, majf=0, minf=1 00:11:24.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 issued rwts: total=2055,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.294 job1: (groupid=0, jobs=1): err= 0: pid=71613: Wed Nov 20 12:25:29 2024 00:11:24.294 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:11:24.294 slat (nsec): min=10195, max=23102, avg=21947.14, stdev=2640.68 00:11:24.294 clat (usec): min=40709, max=41983, avg=41045.07, stdev=313.34 00:11:24.294 lat (usec): min=40719, max=42005, avg=41067.02, stdev=314.04 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:11:24.294 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:24.294 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:24.294 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:24.294 | 
99.99th=[42206] 00:11:24.294 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:11:24.294 slat (nsec): min=8811, max=39695, avg=10412.50, stdev=1887.25 00:11:24.294 clat (usec): min=169, max=347, avg=222.41, stdev=13.90 00:11:24.294 lat (usec): min=180, max=387, avg=232.83, stdev=14.40 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:11:24.294 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:24.294 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 239], 95.00th=[ 241], 00:11:24.294 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 347], 99.95th=[ 347], 00:11:24.294 | 99.99th=[ 347] 00:11:24.294 bw ( KiB/s): min= 4096, max= 4096, per=17.49%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.294 lat (usec) : 250=94.76%, 500=1.12% 00:11:24.294 lat (msec) : 50=4.12% 00:11:24.294 cpu : usr=0.20%, sys=0.59%, ctx=534, majf=0, minf=1 00:11:24.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.294 job2: (groupid=0, jobs=1): err= 0: pid=71614: Wed Nov 20 12:25:29 2024 00:11:24.294 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:11:24.294 slat (nsec): min=10683, max=26289, avg=20193.74, stdev=5248.20 00:11:24.294 clat (usec): min=410, max=42053, avg=39281.07, stdev=8479.08 00:11:24.294 lat (usec): min=427, max=42066, avg=39301.26, stdev=8479.72 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:11:24.294 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:11:24.294 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:24.294 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:24.294 | 99.99th=[42206] 00:11:24.294 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:24.294 slat (nsec): min=11017, max=54263, avg=12679.90, stdev=2612.79 00:11:24.294 clat (usec): min=151, max=312, avg=219.73, stdev=15.02 00:11:24.294 lat (usec): min=163, max=367, avg=232.41, stdev=15.60 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:11:24.294 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:11:24.294 | 70.00th=[ 227], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 241], 00:11:24.294 | 99.00th=[ 269], 99.50th=[ 306], 99.90th=[ 314], 99.95th=[ 314], 00:11:24.294 | 99.99th=[ 314] 00:11:24.294 bw ( KiB/s): min= 4096, max= 4096, per=17.49%, avg=4096.00, stdev= 0.00, samples=1 00:11:24.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:24.294 lat (usec) : 250=93.46%, 500=2.43% 00:11:24.294 lat (msec) : 50=4.11% 00:11:24.294 cpu : usr=0.39%, sys=0.88%, ctx=536, majf=0, minf=1 00:11:24.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.294 job3: (groupid=0, jobs=1): err= 0: pid=71615: Wed Nov 20 12:25:29 2024 00:11:24.294 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:24.294 slat (nsec): min=8457, max=25438, avg=9680.38, stdev=1292.67 00:11:24.294 clat (usec): min=195, max=544, avg=257.63, stdev=49.36 00:11:24.294 lat (usec): min=204, max=553, avg=267.31, stdev=49.44 00:11:24.294 clat percentiles 
(usec): 00:11:24.294 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:11:24.294 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:24.294 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 322], 00:11:24.294 | 99.00th=[ 486], 99.50th=[ 494], 99.90th=[ 502], 99.95th=[ 519], 00:11:24.294 | 99.99th=[ 545] 00:11:24.294 write: IOPS=2414, BW=9658KiB/s (9890kB/s)(9668KiB/1001msec); 0 zone resets 00:11:24.294 slat (nsec): min=10062, max=77988, avg=13586.61, stdev=2261.10 00:11:24.294 clat (usec): min=123, max=639, avg=167.89, stdev=36.90 00:11:24.294 lat (usec): min=136, max=654, avg=181.48, stdev=37.20 00:11:24.294 clat percentiles (usec): 00:11:24.294 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:11:24.294 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:11:24.294 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 204], 95.00th=[ 265], 00:11:24.294 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 359], 00:11:24.294 | 99.99th=[ 644] 00:11:24.294 bw ( KiB/s): min=10160, max=10160, per=43.38%, avg=10160.00, stdev= 0.00, samples=1 00:11:24.294 iops : min= 2540, max= 2540, avg=2540.00, stdev= 0.00, samples=1 00:11:24.294 lat (usec) : 250=77.72%, 500=22.15%, 750=0.13% 00:11:24.294 cpu : usr=4.30%, sys=7.50%, ctx=4467, majf=0, minf=1 00:11:24.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:24.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.294 issued rwts: total=2048,2417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:24.294 00:11:24.294 Run status group 0 (all jobs): 00:11:24.294 READ: bw=15.8MiB/s (16.6MB/s), 85.9KiB/s-8212KiB/s (88.0kB/s-8409kB/s), io=16.2MiB (17.0MB), run=1001-1025msec 00:11:24.294 WRITE: bw=22.9MiB/s (24.0MB/s), 1998KiB/s-9.99MiB/s 
(2046kB/s-10.5MB/s), io=23.4MiB (24.6MB), run=1001-1025msec 00:11:24.294 00:11:24.294 Disk stats (read/write): 00:11:24.294 nvme0n1: ios=1853/2048, merge=0/0, ticks=475/328, in_queue=803, util=86.37% 00:11:24.294 nvme0n2: ios=37/512, merge=0/0, ticks=919/112, in_queue=1031, util=91.24% 00:11:24.294 nvme0n3: ios=43/512, merge=0/0, ticks=1683/105, in_queue=1788, util=98.12% 00:11:24.294 nvme0n4: ios=1897/2048, merge=0/0, ticks=1054/296, in_queue=1350, util=98.00% 00:11:24.294 12:25:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:24.294 [global] 00:11:24.294 thread=1 00:11:24.294 invalidate=1 00:11:24.294 rw=randwrite 00:11:24.294 time_based=1 00:11:24.294 runtime=1 00:11:24.294 ioengine=libaio 00:11:24.294 direct=1 00:11:24.294 bs=4096 00:11:24.294 iodepth=1 00:11:24.294 norandommap=0 00:11:24.294 numjobs=1 00:11:24.294 00:11:24.294 verify_dump=1 00:11:24.294 verify_backlog=512 00:11:24.294 verify_state_save=0 00:11:24.294 do_verify=1 00:11:24.294 verify=crc32c-intel 00:11:24.294 [job0] 00:11:24.294 filename=/dev/nvme0n1 00:11:24.294 [job1] 00:11:24.294 filename=/dev/nvme0n2 00:11:24.294 [job2] 00:11:24.294 filename=/dev/nvme0n3 00:11:24.294 [job3] 00:11:24.294 filename=/dev/nvme0n4 00:11:24.295 Could not set queue depth (nvme0n1) 00:11:24.295 Could not set queue depth (nvme0n2) 00:11:24.295 Could not set queue depth (nvme0n3) 00:11:24.295 Could not set queue depth (nvme0n4) 00:11:24.553 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.553 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.553 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.553 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:24.553 fio-3.35 00:11:24.553 Starting 4 threads 00:11:25.952 00:11:25.952 job0: (groupid=0, jobs=1): err= 0: pid=71989: Wed Nov 20 12:25:31 2024 00:11:25.952 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:11:25.952 slat (nsec): min=9974, max=23812, avg=19851.36, stdev=4495.83 00:11:25.952 clat (usec): min=40443, max=41927, avg=40993.11, stdev=242.62 00:11:25.952 lat (usec): min=40453, max=41951, avg=41012.96, stdev=244.04 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:25.952 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:25.952 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:25.952 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:25.952 | 99.99th=[41681] 00:11:25.952 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:25.952 slat (nsec): min=9932, max=41540, avg=12064.96, stdev=2750.04 00:11:25.952 clat (usec): min=133, max=2490, avg=191.08, stdev=118.23 00:11:25.952 lat (usec): min=143, max=2502, avg=203.14, stdev=118.47 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:25.952 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:25.952 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 249], 95.00th=[ 293], 00:11:25.952 | 99.00th=[ 318], 99.50th=[ 717], 99.90th=[ 2507], 99.95th=[ 2507], 00:11:25.952 | 99.99th=[ 2507] 00:11:25.952 bw ( KiB/s): min= 4096, max= 4096, per=18.66%, avg=4096.00, stdev= 0.00, samples=1 00:11:25.952 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:25.952 lat (usec) : 250=86.33%, 500=8.61%, 750=0.56%, 1000=0.19% 00:11:25.952 lat (msec) : 4=0.19%, 50=4.12% 00:11:25.952 cpu : usr=0.60%, sys=0.60%, ctx=534, majf=0, minf=1 00:11:25.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.952 job1: (groupid=0, jobs=1): err= 0: pid=71990: Wed Nov 20 12:25:31 2024 00:11:25.952 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:25.952 slat (nsec): min=4173, max=34270, avg=7358.13, stdev=1353.43 00:11:25.952 clat (usec): min=156, max=1269, avg=219.71, stdev=42.96 00:11:25.952 lat (usec): min=163, max=1276, avg=227.07, stdev=43.14 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:11:25.952 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 221], 00:11:25.952 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 289], 00:11:25.952 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 529], 99.95th=[ 1029], 00:11:25.952 | 99.99th=[ 1270] 00:11:25.952 write: IOPS=2584, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 00:11:25.952 slat (nsec): min=5194, max=40009, avg=9427.98, stdev=2161.10 00:11:25.952 clat (usec): min=111, max=425, avg=148.23, stdev=29.38 00:11:25.952 lat (usec): min=116, max=453, avg=157.65, stdev=29.81 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:11:25.952 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:11:25.952 | 70.00th=[ 151], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 198], 00:11:25.952 | 99.00th=[ 233], 99.50th=[ 260], 99.90th=[ 416], 99.95th=[ 424], 00:11:25.952 | 99.99th=[ 424] 00:11:25.952 bw ( KiB/s): min=12288, max=12288, per=55.97%, avg=12288.00, stdev= 0.00, samples=1 00:11:25.952 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:25.952 lat (usec) : 250=91.98%, 500=7.97%, 750=0.02% 00:11:25.952 lat 
(msec) : 2=0.04% 00:11:25.952 cpu : usr=1.70%, sys=5.10%, ctx=5147, majf=0, minf=1 00:11:25.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 issued rwts: total=2560,2587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.952 job2: (groupid=0, jobs=1): err= 0: pid=71992: Wed Nov 20 12:25:31 2024 00:11:25.952 read: IOPS=1100, BW=4400KiB/s (4506kB/s)(4400KiB/1000msec) 00:11:25.952 slat (nsec): min=7112, max=28157, avg=8104.94, stdev=1896.80 00:11:25.952 clat (usec): min=195, max=42296, avg=656.31, stdev=4102.82 00:11:25.952 lat (usec): min=203, max=42306, avg=664.41, stdev=4104.27 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:11:25.952 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:11:25.952 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:11:25.952 | 99.00th=[ 420], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:25.952 | 99.99th=[42206] 00:11:25.952 write: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec); 0 zone resets 00:11:25.952 slat (nsec): min=9527, max=37187, avg=10922.04, stdev=1547.66 00:11:25.952 clat (usec): min=118, max=364, avg=160.71, stdev=25.62 00:11:25.952 lat (usec): min=129, max=396, avg=171.64, stdev=25.81 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 139], 00:11:25.952 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 165], 00:11:25.952 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 204], 00:11:25.952 | 99.00th=[ 233], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 367], 00:11:25.952 | 99.99th=[ 367] 00:11:25.952 bw ( KiB/s): min= 4096, max= 4096, per=18.66%, 
avg=4096.00, stdev= 0.00, samples=1 00:11:25.952 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:25.952 lat (usec) : 250=86.08%, 500=13.51% 00:11:25.952 lat (msec) : 50=0.42% 00:11:25.952 cpu : usr=1.10%, sys=2.80%, ctx=2638, majf=0, minf=1 00:11:25.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 issued rwts: total=1100,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.952 job3: (groupid=0, jobs=1): err= 0: pid=71993: Wed Nov 20 12:25:31 2024 00:11:25.952 read: IOPS=514, BW=2056KiB/s (2106kB/s)(2120KiB/1031msec) 00:11:25.952 slat (nsec): min=3527, max=29039, avg=4940.10, stdev=3551.46 00:11:25.952 clat (usec): min=179, max=41881, avg=1603.70, stdev=7407.66 00:11:25.952 lat (usec): min=184, max=41906, avg=1608.64, stdev=7410.90 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:11:25.952 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:11:25.952 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 338], 00:11:25.952 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:25.952 | 99.99th=[41681] 00:11:25.952 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:11:25.952 slat (nsec): min=5144, max=36204, avg=9405.03, stdev=3627.31 00:11:25.952 clat (usec): min=115, max=312, avg=161.51, stdev=26.47 00:11:25.952 lat (usec): min=121, max=327, avg=170.92, stdev=26.67 00:11:25.952 clat percentiles (usec): 00:11:25.952 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:11:25.952 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:11:25.952 | 70.00th=[ 172], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 208], 
00:11:25.952 | 99.00th=[ 235], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 314], 00:11:25.952 | 99.99th=[ 314] 00:11:25.952 bw ( KiB/s): min= 8192, max= 8192, per=37.31%, avg=8192.00, stdev= 0.00, samples=1 00:11:25.952 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:25.952 lat (usec) : 250=96.98%, 500=1.87% 00:11:25.952 lat (msec) : 50=1.16% 00:11:25.952 cpu : usr=1.07%, sys=1.55%, ctx=1555, majf=0, minf=1 00:11:25.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:25.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.952 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:25.952 00:11:25.952 Run status group 0 (all jobs): 00:11:25.952 READ: bw=16.0MiB/s (16.7MB/s), 87.3KiB/s-9.99MiB/s (89.4kB/s-10.5MB/s), io=16.5MiB (17.3MB), run=1000-1031msec 00:11:25.952 WRITE: bw=21.4MiB/s (22.5MB/s), 2032KiB/s-10.1MiB/s (2081kB/s-10.6MB/s), io=22.1MiB (23.2MB), run=1000-1031msec 00:11:25.952 00:11:25.952 Disk stats (read/write): 00:11:25.952 nvme0n1: ios=68/512, merge=0/0, ticks=760/89, in_queue=849, util=86.77% 00:11:25.952 nvme0n2: ios=2048/2353, merge=0/0, ticks=449/339, in_queue=788, util=86.90% 00:11:25.952 nvme0n3: ios=942/1024, merge=0/0, ticks=848/162, in_queue=1010, util=98.44% 00:11:25.953 nvme0n4: ios=562/1024, merge=0/0, ticks=1435/150, in_queue=1585, util=96.44% 00:11:25.953 12:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:25.953 [global] 00:11:25.953 thread=1 00:11:25.953 invalidate=1 00:11:25.953 rw=write 00:11:25.953 time_based=1 00:11:25.953 runtime=1 00:11:25.953 ioengine=libaio 00:11:25.953 direct=1 00:11:25.953 bs=4096 00:11:25.953 iodepth=128 00:11:25.953 
norandommap=0 00:11:25.953 numjobs=1 00:11:25.953 00:11:25.953 verify_dump=1 00:11:25.953 verify_backlog=512 00:11:25.953 verify_state_save=0 00:11:25.953 do_verify=1 00:11:25.953 verify=crc32c-intel 00:11:25.953 [job0] 00:11:25.953 filename=/dev/nvme0n1 00:11:25.953 [job1] 00:11:25.953 filename=/dev/nvme0n2 00:11:25.953 [job2] 00:11:25.953 filename=/dev/nvme0n3 00:11:25.953 [job3] 00:11:25.953 filename=/dev/nvme0n4 00:11:25.953 Could not set queue depth (nvme0n1) 00:11:25.953 Could not set queue depth (nvme0n2) 00:11:25.953 Could not set queue depth (nvme0n3) 00:11:25.953 Could not set queue depth (nvme0n4) 00:11:26.208 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.208 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.208 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.208 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:26.208 fio-3.35 00:11:26.208 Starting 4 threads 00:11:27.578 00:11:27.578 job0: (groupid=0, jobs=1): err= 0: pid=72362: Wed Nov 20 12:25:32 2024 00:11:27.578 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:27.578 slat (nsec): min=1059, max=11956k, avg=93748.03, stdev=553282.85 00:11:27.578 clat (usec): min=4178, max=23968, avg=11900.59, stdev=3361.07 00:11:27.578 lat (usec): min=4183, max=28893, avg=11994.34, stdev=3374.16 00:11:27.578 clat percentiles (usec): 00:11:27.578 | 1.00th=[ 6325], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9765], 00:11:27.578 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11600], 00:11:27.578 | 70.00th=[12387], 80.00th=[15008], 90.00th=[16909], 95.00th=[19268], 00:11:27.578 | 99.00th=[21103], 99.50th=[21890], 99.90th=[23987], 99.95th=[23987], 00:11:27.578 | 99.99th=[23987] 00:11:27.578 write: IOPS=5500, BW=21.5MiB/s 
(22.5MB/s)(21.5MiB/1002msec); 0 zone resets 00:11:27.578 slat (nsec): min=1811, max=12240k, avg=90532.64, stdev=529268.52 00:11:27.578 clat (usec): min=228, max=34870, avg=11948.46, stdev=4360.49 00:11:27.578 lat (usec): min=1220, max=34879, avg=12038.99, stdev=4385.60 00:11:27.578 clat percentiles (usec): 00:11:27.578 | 1.00th=[ 4359], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[ 9634], 00:11:27.578 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 00:11:27.578 | 70.00th=[11994], 80.00th=[13435], 90.00th=[16188], 95.00th=[22938], 00:11:27.578 | 99.00th=[28443], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:11:27.578 | 99.99th=[34866] 00:11:27.578 bw ( KiB/s): min=20056, max=23008, per=28.29%, avg=21532.00, stdev=2087.38, samples=2 00:11:27.578 iops : min= 5014, max= 5752, avg=5383.00, stdev=521.84, samples=2 00:11:27.578 lat (usec) : 250=0.01% 00:11:27.578 lat (msec) : 2=0.20%, 4=0.30%, 10=27.87%, 20=66.16%, 50=5.47% 00:11:27.578 cpu : usr=2.70%, sys=4.30%, ctx=533, majf=0, minf=1 00:11:27.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:27.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.578 issued rwts: total=5120,5511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.578 job1: (groupid=0, jobs=1): err= 0: pid=72363: Wed Nov 20 12:25:32 2024 00:11:27.578 read: IOPS=3587, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1014msec) 00:11:27.578 slat (nsec): min=1343, max=24310k, avg=126106.54, stdev=974534.96 00:11:27.578 clat (usec): min=4374, max=51206, avg=15859.69, stdev=7399.57 00:11:27.578 lat (usec): min=4380, max=54637, avg=15985.80, stdev=7487.95 00:11:27.578 clat percentiles (usec): 00:11:27.578 | 1.00th=[ 7242], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11338], 00:11:27.578 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12780], 
60.00th=[13173], 00:11:27.578 | 70.00th=[17433], 80.00th=[19792], 90.00th=[26084], 95.00th=[31065], 00:11:27.578 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:11:27.578 | 99.99th=[51119] 00:11:27.578 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:11:27.578 slat (usec): min=2, max=13447, avg=126.25, stdev=796.21 00:11:27.578 clat (usec): min=1363, max=87175, avg=17329.30, stdev=14340.88 00:11:27.578 lat (usec): min=1376, max=93084, avg=17455.55, stdev=14433.30 00:11:27.578 clat percentiles (usec): 00:11:27.578 | 1.00th=[ 4621], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[ 9503], 00:11:27.578 | 30.00th=[10159], 40.00th=[11600], 50.00th=[12256], 60.00th=[13435], 00:11:27.578 | 70.00th=[15270], 80.00th=[20317], 90.00th=[31589], 95.00th=[50594], 00:11:27.578 | 99.00th=[84411], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:11:27.578 | 99.99th=[87557] 00:11:27.578 bw ( KiB/s): min=14960, max=17224, per=21.14%, avg=16092.00, stdev=1600.89, samples=2 00:11:27.578 iops : min= 3740, max= 4306, avg=4023.00, stdev=400.22, samples=2 00:11:27.578 lat (msec) : 2=0.06%, 4=0.26%, 10=16.03%, 20=62.96%, 50=17.65% 00:11:27.578 lat (msec) : 100=3.04% 00:11:27.578 cpu : usr=2.76%, sys=5.03%, ctx=392, majf=0, minf=2 00:11:27.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:27.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.579 issued rwts: total=3638,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.579 job2: (groupid=0, jobs=1): err= 0: pid=72364: Wed Nov 20 12:25:32 2024 00:11:27.579 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:11:27.579 slat (nsec): min=1142, max=22915k, avg=110295.41, stdev=766904.41 00:11:27.579 clat (usec): min=3371, max=77320, avg=14165.51, stdev=8922.10 
00:11:27.579 lat (usec): min=5737, max=77346, avg=14275.80, stdev=8990.82 00:11:27.579 clat percentiles (usec): 00:11:27.579 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10683], 00:11:27.579 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12780], 00:11:27.579 | 70.00th=[13304], 80.00th=[13960], 90.00th=[15401], 95.00th=[32375], 00:11:27.579 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63701], 99.95th=[70779], 00:11:27.579 | 99.99th=[77071] 00:11:27.579 write: IOPS=5030, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1010msec); 0 zone resets 00:11:27.579 slat (nsec): min=1838, max=14767k, avg=92883.84, stdev=568999.41 00:11:27.579 clat (usec): min=4926, max=34756, avg=12264.44, stdev=3277.39 00:11:27.579 lat (usec): min=4931, max=34769, avg=12357.33, stdev=3304.49 00:11:27.579 clat percentiles (usec): 00:11:27.579 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:11:27.579 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:11:27.579 | 70.00th=[12911], 80.00th=[13566], 90.00th=[15795], 95.00th=[18482], 00:11:27.579 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:11:27.579 | 99.99th=[34866] 00:11:27.579 bw ( KiB/s): min=18976, max=20648, per=26.03%, avg=19812.00, stdev=1182.28, samples=2 00:11:27.579 iops : min= 4744, max= 5162, avg=4953.00, stdev=295.57, samples=2 00:11:27.579 lat (msec) : 4=0.01%, 10=16.16%, 20=79.16%, 50=3.34%, 100=1.32% 00:11:27.579 cpu : usr=2.18%, sys=4.96%, ctx=443, majf=0, minf=1 00:11:27.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:27.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.579 issued rwts: total=4608,5081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.579 job3: (groupid=0, jobs=1): err= 0: pid=72365: Wed Nov 20 12:25:32 2024 
00:11:27.579 read: IOPS=4160, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1011msec) 00:11:27.579 slat (nsec): min=1159, max=14034k, avg=102854.62, stdev=846312.42 00:11:27.579 clat (usec): min=1576, max=40336, avg=15104.33, stdev=5486.16 00:11:27.579 lat (usec): min=1587, max=40355, avg=15207.18, stdev=5563.49 00:11:27.579 clat percentiles (usec): 00:11:27.579 | 1.00th=[ 4686], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10683], 00:11:27.579 | 30.00th=[11863], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:11:27.579 | 70.00th=[17433], 80.00th=[19792], 90.00th=[22152], 95.00th=[27132], 00:11:27.579 | 99.00th=[29230], 99.50th=[29754], 99.90th=[34341], 99.95th=[35914], 00:11:27.579 | 99.99th=[40109] 00:11:27.579 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:11:27.579 slat (usec): min=2, max=20070, avg=82.47, stdev=737.76 00:11:27.579 clat (usec): min=577, max=72825, avg=13609.78, stdev=9531.54 00:11:27.579 lat (usec): min=585, max=72834, avg=13692.25, stdev=9571.53 00:11:27.579 clat percentiles (usec): 00:11:27.579 | 1.00th=[ 2737], 5.00th=[ 4621], 10.00th=[ 5735], 20.00th=[ 8160], 00:11:27.579 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11731], 60.00th=[12518], 00:11:27.579 | 70.00th=[13435], 80.00th=[15926], 90.00th=[21890], 95.00th=[28443], 00:11:27.579 | 99.00th=[62129], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:11:27.579 | 99.99th=[72877] 00:11:27.579 bw ( KiB/s): min=18168, max=18560, per=24.13%, avg=18364.00, stdev=277.19, samples=2 00:11:27.579 iops : min= 4542, max= 4640, avg=4591.00, stdev=69.30, samples=2 00:11:27.579 lat (usec) : 750=0.05% 00:11:27.579 lat (msec) : 2=0.16%, 4=2.03%, 10=19.58%, 20=61.95%, 50=15.53% 00:11:27.579 lat (msec) : 100=0.70% 00:11:27.579 cpu : usr=3.07%, sys=5.15%, ctx=345, majf=0, minf=1 00:11:27.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:27.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.579 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:27.579 issued rwts: total=4206,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:27.579 00:11:27.579 Run status group 0 (all jobs): 00:11:27.579 READ: bw=67.7MiB/s (71.0MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=68.6MiB (72.0MB), run=1002-1014msec 00:11:27.579 WRITE: bw=74.3MiB/s (77.9MB/s), 15.8MiB/s-21.5MiB/s (16.5MB/s-22.5MB/s), io=75.4MiB (79.0MB), run=1002-1014msec 00:11:27.579 00:11:27.579 Disk stats (read/write): 00:11:27.579 nvme0n1: ios=4146/4606, merge=0/0, ticks=16624/20541, in_queue=37165, util=86.97% 00:11:27.579 nvme0n2: ios=3093/3416, merge=0/0, ticks=48931/56821, in_queue=105752, util=87.31% 00:11:27.579 nvme0n3: ios=4013/4096, merge=0/0, ticks=21563/19449, in_queue=41012, util=98.44% 00:11:27.579 nvme0n4: ios=3938/4096, merge=0/0, ticks=51543/43663, in_queue=95206, util=96.65% 00:11:27.579 12:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:27.579 [global] 00:11:27.579 thread=1 00:11:27.579 invalidate=1 00:11:27.579 rw=randwrite 00:11:27.579 time_based=1 00:11:27.579 runtime=1 00:11:27.579 ioengine=libaio 00:11:27.579 direct=1 00:11:27.579 bs=4096 00:11:27.579 iodepth=128 00:11:27.579 norandommap=0 00:11:27.579 numjobs=1 00:11:27.579 00:11:27.579 verify_dump=1 00:11:27.579 verify_backlog=512 00:11:27.579 verify_state_save=0 00:11:27.579 do_verify=1 00:11:27.579 verify=crc32c-intel 00:11:27.579 [job0] 00:11:27.579 filename=/dev/nvme0n1 00:11:27.579 [job1] 00:11:27.579 filename=/dev/nvme0n2 00:11:27.579 [job2] 00:11:27.579 filename=/dev/nvme0n3 00:11:27.579 [job3] 00:11:27.579 filename=/dev/nvme0n4 00:11:27.579 Could not set queue depth (nvme0n1) 00:11:27.579 Could not set queue depth (nvme0n2) 00:11:27.579 Could not set queue depth (nvme0n3) 00:11:27.579 Could not set queue 
depth (nvme0n4) 00:11:27.579 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.579 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.579 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.579 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:27.579 fio-3.35 00:11:27.579 Starting 4 threads 00:11:28.951 00:11:28.951 job0: (groupid=0, jobs=1): err= 0: pid=72739: Wed Nov 20 12:25:34 2024 00:11:28.951 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:11:28.951 slat (nsec): min=1324, max=10065k, avg=107934.63, stdev=705852.99 00:11:28.951 clat (usec): min=4350, max=45178, avg=13478.57, stdev=4498.17 00:11:28.951 lat (usec): min=4356, max=45186, avg=13586.50, stdev=4548.14 00:11:28.951 clat percentiles (usec): 00:11:28.951 | 1.00th=[ 6652], 5.00th=[10552], 10.00th=[10945], 20.00th=[11076], 00:11:28.951 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12780], 00:11:28.951 | 70.00th=[13435], 80.00th=[15008], 90.00th=[17695], 95.00th=[20579], 00:11:28.951 | 99.00th=[34341], 99.50th=[38536], 99.90th=[45351], 99.95th=[45351], 00:11:28.951 | 99.99th=[45351] 00:11:28.952 write: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1009msec); 0 zone resets 00:11:28.952 slat (usec): min=2, max=9631, avg=112.88, stdev=583.56 00:11:28.952 clat (usec): min=512, max=38010, avg=15904.50, stdev=6877.63 00:11:28.952 lat (usec): min=525, max=38028, avg=16017.38, stdev=6938.54 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 3851], 5.00th=[ 6587], 10.00th=[ 8717], 20.00th=[10552], 00:11:28.952 | 30.00th=[11338], 40.00th=[11994], 50.00th=[14222], 60.00th=[16057], 00:11:28.952 | 70.00th=[19792], 80.00th=[21103], 90.00th=[26870], 95.00th=[28705], 00:11:28.952 | 99.00th=[32900], 99.50th=[33817], 
99.90th=[38011], 99.95th=[38011], 00:11:28.952 | 99.99th=[38011] 00:11:28.952 bw ( KiB/s): min=17208, max=18304, per=23.47%, avg=17756.00, stdev=774.99, samples=2 00:11:28.952 iops : min= 4302, max= 4576, avg=4439.00, stdev=193.75, samples=2 00:11:28.952 lat (usec) : 750=0.03% 00:11:28.952 lat (msec) : 4=0.58%, 10=8.02%, 20=72.78%, 50=18.59% 00:11:28.952 cpu : usr=3.77%, sys=5.36%, ctx=426, majf=0, minf=1 00:11:28.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:28.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.952 issued rwts: total=4096,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.952 job1: (groupid=0, jobs=1): err= 0: pid=72740: Wed Nov 20 12:25:34 2024 00:11:28.952 read: IOPS=5007, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:11:28.952 slat (nsec): min=1047, max=15423k, avg=105943.52, stdev=652963.32 00:11:28.952 clat (usec): min=2165, max=42044, avg=12873.78, stdev=5596.77 00:11:28.952 lat (usec): min=2168, max=42050, avg=12979.72, stdev=5614.97 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 5080], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[ 9896], 00:11:28.952 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10814], 60.00th=[11338], 00:11:28.952 | 70.00th=[12780], 80.00th=[15139], 90.00th=[19792], 95.00th=[25035], 00:11:28.952 | 99.00th=[35914], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.952 | 99.99th=[42206] 00:11:28.952 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:28.952 slat (nsec): min=1858, max=13818k, avg=87769.87, stdev=442052.45 00:11:28.952 clat (usec): min=4207, max=46707, avg=12118.66, stdev=5520.03 00:11:28.952 lat (usec): min=4607, max=46712, avg=12206.43, stdev=5545.34 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 6915], 5.00th=[ 8225], 
10.00th=[ 8848], 20.00th=[ 9634], 00:11:28.952 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:11:28.952 | 70.00th=[11994], 80.00th=[12911], 90.00th=[16319], 95.00th=[20055], 00:11:28.952 | 99.00th=[40109], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:11:28.952 | 99.99th=[46924] 00:11:28.952 bw ( KiB/s): min=16384, max=24576, per=27.08%, avg=20480.00, stdev=5792.62, samples=2 00:11:28.952 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:11:28.952 lat (msec) : 4=0.29%, 10=26.85%, 20=65.38%, 50=7.49% 00:11:28.952 cpu : usr=2.00%, sys=3.39%, ctx=656, majf=0, minf=2 00:11:28.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:28.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.952 issued rwts: total=5023,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.952 job2: (groupid=0, jobs=1): err= 0: pid=72741: Wed Nov 20 12:25:34 2024 00:11:28.952 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:11:28.952 slat (nsec): min=1100, max=14617k, avg=119819.32, stdev=720612.16 00:11:28.952 clat (usec): min=5912, max=53363, avg=15220.09, stdev=6502.04 00:11:28.952 lat (usec): min=6044, max=55941, avg=15339.91, stdev=6534.83 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 7111], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10945], 00:11:28.952 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13042], 60.00th=[14877], 00:11:28.952 | 70.00th=[16450], 80.00th=[16909], 90.00th=[21890], 95.00th=[27395], 00:11:28.952 | 99.00th=[40633], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:11:28.952 | 99.99th=[53216] 00:11:28.952 write: IOPS=4053, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1005msec); 0 zone resets 00:11:28.952 slat (nsec): min=1834, max=16227k, avg=135734.43, stdev=702211.74 00:11:28.952 clat 
(usec): min=4021, max=54954, avg=17773.74, stdev=9518.28 00:11:28.952 lat (usec): min=5072, max=55629, avg=17909.47, stdev=9566.99 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11731], 00:11:28.952 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[15401], 00:11:28.952 | 70.00th=[20317], 80.00th=[23725], 90.00th=[29230], 95.00th=[38536], 00:11:28.952 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:11:28.952 | 99.99th=[54789] 00:11:28.952 bw ( KiB/s): min=14600, max=16976, per=20.87%, avg=15788.00, stdev=1680.09, samples=2 00:11:28.952 iops : min= 3650, max= 4244, avg=3947.00, stdev=420.02, samples=2 00:11:28.952 lat (msec) : 10=11.15%, 20=65.84%, 50=21.28%, 100=1.72% 00:11:28.952 cpu : usr=1.79%, sys=3.69%, ctx=455, majf=0, minf=1 00:11:28.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:28.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.952 issued rwts: total=3584,4074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.952 job3: (groupid=0, jobs=1): err= 0: pid=72742: Wed Nov 20 12:25:34 2024 00:11:28.952 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:11:28.952 slat (nsec): min=1083, max=14349k, avg=93045.88, stdev=535841.96 00:11:28.952 clat (usec): min=3874, max=19140, avg=12130.62, stdev=1903.51 00:11:28.952 lat (usec): min=3881, max=24719, avg=12223.67, stdev=1910.28 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10945], 00:11:28.952 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:11:28.952 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14091], 95.00th=[15533], 00:11:28.952 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 
00:11:28.952 | 99.99th=[19268] 00:11:28.952 write: IOPS=5314, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1001msec); 0 zone resets 00:11:28.952 slat (nsec): min=1810, max=13010k, avg=93366.50, stdev=500170.19 00:11:28.952 clat (usec): min=263, max=35839, avg=12070.33, stdev=3168.76 00:11:28.952 lat (usec): min=2774, max=35852, avg=12163.69, stdev=3166.45 00:11:28.952 clat percentiles (usec): 00:11:28.952 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10421], 00:11:28.952 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:11:28.952 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14222], 95.00th=[16712], 00:11:28.952 | 99.00th=[26346], 99.50th=[28181], 99.90th=[33162], 99.95th=[35914], 00:11:28.952 | 99.99th=[35914] 00:11:28.952 bw ( KiB/s): min=20480, max=20480, per=27.08%, avg=20480.00, stdev= 0.00, samples=1 00:11:28.952 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:28.952 lat (usec) : 500=0.01% 00:11:28.952 lat (msec) : 4=0.50%, 10=9.12%, 20=88.81%, 50=1.56% 00:11:28.952 cpu : usr=2.80%, sys=3.90%, ctx=555, majf=0, minf=1 00:11:28.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:28.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.952 issued rwts: total=5120,5320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.952 00:11:28.952 Run status group 0 (all jobs): 00:11:28.952 READ: bw=69.0MiB/s (72.4MB/s), 13.9MiB/s-20.0MiB/s (14.6MB/s-20.9MB/s), io=69.6MiB (73.0MB), run=1001-1009msec 00:11:28.952 WRITE: bw=73.9MiB/s (77.5MB/s), 15.8MiB/s-20.8MiB/s (16.6MB/s-21.8MB/s), io=74.5MiB (78.2MB), run=1001-1009msec 00:11:28.952 00:11:28.952 Disk stats (read/write): 00:11:28.952 nvme0n1: ios=3636/3639, merge=0/0, ticks=36042/39841, in_queue=75883, util=97.39% 00:11:28.952 nvme0n2: ios=3632/4087, merge=0/0, 
ticks=15122/12574, in_queue=27696, util=97.23% 00:11:28.952 nvme0n3: ios=3110/3431, merge=0/0, ticks=14621/16790, in_queue=31411, util=98.37% 00:11:28.952 nvme0n4: ios=4096/4257, merge=0/0, ticks=17951/16103, in_queue=34054, util=88.19% 00:11:28.952 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:28.952 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72974 00:11:28.952 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:28.952 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:28.952 [global] 00:11:28.952 thread=1 00:11:28.952 invalidate=1 00:11:28.952 rw=read 00:11:28.952 time_based=1 00:11:28.952 runtime=10 00:11:28.952 ioengine=libaio 00:11:28.952 direct=1 00:11:28.952 bs=4096 00:11:28.952 iodepth=1 00:11:28.952 norandommap=1 00:11:28.952 numjobs=1 00:11:28.952 00:11:28.952 [job0] 00:11:28.952 filename=/dev/nvme0n1 00:11:28.952 [job1] 00:11:28.952 filename=/dev/nvme0n2 00:11:28.952 [job2] 00:11:28.952 filename=/dev/nvme0n3 00:11:28.952 [job3] 00:11:28.952 filename=/dev/nvme0n4 00:11:28.952 Could not set queue depth (nvme0n1) 00:11:28.952 Could not set queue depth (nvme0n2) 00:11:28.952 Could not set queue depth (nvme0n3) 00:11:28.952 Could not set queue depth (nvme0n4) 00:11:29.209 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.209 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.209 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.209 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:29.209 fio-3.35 00:11:29.209 Starting 4 threads 00:11:32.484 12:25:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:32.484 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46190592, buflen=4096 00:11:32.484 fio: pid=73114, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:32.484 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:32.484 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44044288, buflen=4096 00:11:32.484 fio: pid=73113, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:32.484 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.484 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:32.484 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53592064, buflen=4096 00:11:32.484 fio: pid=73111, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:32.741 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.741 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:32.741 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:32.741 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:11:32.741 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=32432128, buflen=4096 00:11:32.741 fio: pid=73112, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:33.002 00:11:33.002 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=73111: Wed Nov 20 12:25:38 2024 00:11:33.002 read: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(51.1MiB/3177msec) 00:11:33.002 slat (usec): min=7, max=33327, avg=12.78, stdev=311.65 00:11:33.002 clat (usec): min=166, max=41046, avg=226.50, stdev=359.25 00:11:33.002 lat (usec): min=173, max=41054, avg=239.27, stdev=476.06 00:11:33.002 clat percentiles (usec): 00:11:33.002 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:11:33.002 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:11:33.002 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:11:33.002 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 355], 99.95th=[ 445], 00:11:33.002 | 99.99th=[ 4228] 00:11:33.002 bw ( KiB/s): min=14240, max=17312, per=32.82%, avg=16583.67, stdev=1191.73, samples=6 00:11:33.002 iops : min= 3560, max= 4328, avg=4145.83, stdev=297.94, samples=6 00:11:33.002 lat (usec) : 250=91.88%, 500=8.09% 00:11:33.002 lat (msec) : 2=0.01%, 10=0.01%, 50=0.01% 00:11:33.002 cpu : usr=1.73%, sys=5.10%, ctx=13089, majf=0, minf=1 00:11:33.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 issued rwts: total=13085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.002 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=73112: Wed Nov 20 12:25:38 2024 00:11:33.002 read: IOPS=2324, BW=9296KiB/s 
(9519kB/s)(30.9MiB/3407msec) 00:11:33.002 slat (usec): min=6, max=11854, avg=11.41, stdev=185.27 00:11:33.002 clat (usec): min=175, max=42409, avg=414.53, stdev=2554.55 00:11:33.002 lat (usec): min=183, max=48911, avg=425.94, stdev=2577.21 00:11:33.002 clat percentiles (usec): 00:11:33.002 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 223], 00:11:33.002 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 258], 00:11:33.002 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 310], 00:11:33.002 | 99.00th=[ 445], 99.50th=[ 490], 99.90th=[41157], 99.95th=[41157], 00:11:33.002 | 99.99th=[42206] 00:11:33.002 bw ( KiB/s): min= 3360, max=15992, per=20.02%, avg=10117.17, stdev=5334.72, samples=6 00:11:33.002 iops : min= 840, max= 3998, avg=2529.17, stdev=1333.85, samples=6 00:11:33.002 lat (usec) : 250=51.45%, 500=48.09%, 750=0.05% 00:11:33.002 lat (msec) : 50=0.40% 00:11:33.002 cpu : usr=1.15%, sys=2.61%, ctx=7926, majf=0, minf=2 00:11:33.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 issued rwts: total=7919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.002 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=73113: Wed Nov 20 12:25:38 2024 00:11:33.002 read: IOPS=3638, BW=14.2MiB/s (14.9MB/s)(42.0MiB/2956msec) 00:11:33.002 slat (nsec): min=7012, max=43686, avg=8689.91, stdev=1579.91 00:11:33.002 clat (usec): min=176, max=41812, avg=262.11, stdev=562.79 00:11:33.002 lat (usec): min=194, max=41821, avg=270.80, stdev=562.81 00:11:33.002 clat percentiles (usec): 00:11:33.002 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229], 00:11:33.002 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 
00:11:33.002 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:11:33.002 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 498], 99.95th=[ 537], 00:11:33.002 | 99.99th=[41157] 00:11:33.002 bw ( KiB/s): min=12120, max=15512, per=28.60%, avg=14448.00, stdev=1442.95, samples=5 00:11:33.002 iops : min= 3030, max= 3878, avg=3612.00, stdev=360.74, samples=5 00:11:33.002 lat (usec) : 250=53.50%, 500=46.41%, 750=0.06% 00:11:33.002 lat (msec) : 4=0.01%, 50=0.02% 00:11:33.002 cpu : usr=2.77%, sys=5.35%, ctx=10754, majf=0, minf=2 00:11:33.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 issued rwts: total=10754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.002 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=73114: Wed Nov 20 12:25:38 2024 00:11:33.002 read: IOPS=4135, BW=16.2MiB/s (16.9MB/s)(44.1MiB/2727msec) 00:11:33.002 slat (nsec): min=6995, max=46147, avg=8139.60, stdev=1220.01 00:11:33.002 clat (usec): min=183, max=1343, avg=229.86, stdev=20.17 00:11:33.002 lat (usec): min=190, max=1351, avg=238.00, stdev=20.22 00:11:33.002 clat percentiles (usec): 00:11:33.002 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:33.002 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:11:33.002 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:11:33.002 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 383], 00:11:33.002 | 99.99th=[ 515] 00:11:33.002 bw ( KiB/s): min=16240, max=17048, per=33.23%, avg=16790.40, stdev=323.95, samples=5 00:11:33.002 iops : min= 4060, max= 4262, avg=4197.60, stdev=80.99, samples=5 00:11:33.002 lat (usec) : 250=88.25%, 500=11.72%, 750=0.01% 
00:11:33.002 lat (msec) : 2=0.01% 00:11:33.002 cpu : usr=2.13%, sys=6.75%, ctx=11278, majf=0, minf=2 00:11:33.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.002 issued rwts: total=11278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.002 00:11:33.002 Run status group 0 (all jobs): 00:11:33.002 READ: bw=49.3MiB/s (51.7MB/s), 9296KiB/s-16.2MiB/s (9519kB/s-16.9MB/s), io=168MiB (176MB), run=2727-3407msec 00:11:33.002 00:11:33.002 Disk stats (read/write): 00:11:33.002 nvme0n1: ios=12857/0, merge=0/0, ticks=2793/0, in_queue=2793, util=94.33% 00:11:33.002 nvme0n2: ios=7952/0, merge=0/0, ticks=4100/0, in_queue=4100, util=99.20% 00:11:33.002 nvme0n3: ios=10438/0, merge=0/0, ticks=2599/0, in_queue=2599, util=96.52% 00:11:33.002 nvme0n4: ios=10895/0, merge=0/0, ticks=2352/0, in_queue=2352, util=96.45% 00:11:33.002 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.002 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:33.326 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.326 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:33.593 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.593 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:33.593 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:33.593 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72974 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.874 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:34.151 12:25:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:34.151 nvmf hotplug test: fio failed as expected 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:34.151 rmmod nvme_tcp 00:11:34.151 rmmod nvme_fabrics 00:11:34.151 rmmod nvme_keyring 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70245 ']' 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70245 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70245 ']' 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70245 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.151 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70245 00:11:34.410 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.410 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.410 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70245' 00:11:34.410 killing process with pid 70245 00:11:34.410 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70245 00:11:34.410 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70245 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.410 12:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.944 00:11:36.944 real 0m26.947s 00:11:36.944 user 1m46.299s 00:11:36.944 sys 0m9.202s 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.944 ************************************ 00:11:36.944 END TEST nvmf_fio_target 00:11:36.944 ************************************ 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:36.944 ************************************ 
00:11:36.944 START TEST nvmf_bdevio 00:11:36.944 ************************************ 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:36.944 * Looking for test storage... 00:11:36.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.944 12:25:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.944 --rc genhtml_branch_coverage=1 00:11:36.944 --rc genhtml_function_coverage=1 00:11:36.944 --rc genhtml_legend=1 00:11:36.944 --rc geninfo_all_blocks=1 00:11:36.944 --rc geninfo_unexecuted_blocks=1 00:11:36.944 00:11:36.944 ' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.944 --rc genhtml_branch_coverage=1 00:11:36.944 --rc genhtml_function_coverage=1 00:11:36.944 --rc genhtml_legend=1 00:11:36.944 --rc geninfo_all_blocks=1 00:11:36.944 --rc geninfo_unexecuted_blocks=1 00:11:36.944 00:11:36.944 ' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.944 --rc genhtml_branch_coverage=1 00:11:36.944 --rc genhtml_function_coverage=1 00:11:36.944 --rc genhtml_legend=1 00:11:36.944 --rc geninfo_all_blocks=1 00:11:36.944 --rc geninfo_unexecuted_blocks=1 00:11:36.944 00:11:36.944 ' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.944 --rc genhtml_branch_coverage=1 00:11:36.944 --rc genhtml_function_coverage=1 00:11:36.944 --rc genhtml_legend=1 00:11:36.944 --rc geninfo_all_blocks=1 00:11:36.944 --rc geninfo_unexecuted_blocks=1 00:11:36.944 00:11:36.944 ' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.944 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.945 12:25:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.945 12:25:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.515 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.515 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:43.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:43.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:43.515 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.516 
12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:43.516 Found net devices under 0000:86:00.0: cvl_0_0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:43.516 Found net devices under 0000:86:00.1: cvl_0_1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:11:43.516 00:11:43.516 --- 10.0.0.2 ping statistics --- 00:11:43.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.516 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:43.516 00:11:43.516 --- 10.0.0.1 ping statistics --- 00:11:43.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.516 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.516 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=77596 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 77596 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 77596 ']' 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.516 [2024-11-20 12:25:48.576026] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:11:43.516 [2024-11-20 12:25:48.576070] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.516 [2024-11-20 12:25:48.655602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.516 [2024-11-20 12:25:48.699270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.516 [2024-11-20 12:25:48.699302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.516 [2024-11-20 12:25:48.699309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.516 [2024-11-20 12:25:48.699315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.516 [2024-11-20 12:25:48.699321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:43.516 [2024-11-20 12:25:48.700849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:43.516 [2024-11-20 12:25:48.700956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:43.516 [2024-11-20 12:25:48.701064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.516 [2024-11-20 12:25:48.701064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:43.516 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 [2024-11-20 12:25:48.836835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 Malloc0 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 [2024-11-20 12:25:48.908728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:43.517 { 00:11:43.517 "params": { 00:11:43.517 "name": "Nvme$subsystem", 00:11:43.517 "trtype": "$TEST_TRANSPORT", 00:11:43.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:43.517 "adrfam": "ipv4", 00:11:43.517 "trsvcid": "$NVMF_PORT", 00:11:43.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:43.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:43.517 "hdgst": ${hdgst:-false}, 00:11:43.517 "ddgst": ${ddgst:-false} 00:11:43.517 }, 00:11:43.517 "method": "bdev_nvme_attach_controller" 00:11:43.517 } 00:11:43.517 EOF 00:11:43.517 )") 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:43.517 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:43.517 "params": { 00:11:43.517 "name": "Nvme1", 00:11:43.517 "trtype": "tcp", 00:11:43.517 "traddr": "10.0.0.2", 00:11:43.517 "adrfam": "ipv4", 00:11:43.517 "trsvcid": "4420", 00:11:43.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:43.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:43.517 "hdgst": false, 00:11:43.517 "ddgst": false 00:11:43.517 }, 00:11:43.517 "method": "bdev_nvme_attach_controller" 00:11:43.517 }' 00:11:43.517 [2024-11-20 12:25:48.960880] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:11:43.517 [2024-11-20 12:25:48.960920] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77621 ] 00:11:43.517 [2024-11-20 12:25:49.036942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.517 [2024-11-20 12:25:49.080499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.517 [2024-11-20 12:25:49.080596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.517 [2024-11-20 12:25:49.080596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.517 I/O targets: 00:11:43.517 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:43.517 00:11:43.517 00:11:43.517 CUnit - A unit testing framework for C - Version 2.1-3 00:11:43.517 http://cunit.sourceforge.net/ 00:11:43.517 00:11:43.517 00:11:43.517 Suite: bdevio tests on: Nvme1n1 00:11:43.775 Test: blockdev write read block ...passed 00:11:43.775 Test: blockdev write zeroes read block ...passed 00:11:43.775 Test: blockdev write zeroes read no split ...passed 00:11:43.775 Test: blockdev write zeroes read split ...passed 
00:11:43.775 Test: blockdev write zeroes read split partial ...passed 00:11:43.775 Test: blockdev reset ...[2024-11-20 12:25:49.351233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:43.775 [2024-11-20 12:25:49.351296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e340 (9): Bad file descriptor 00:11:43.775 [2024-11-20 12:25:49.446473] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:43.775 passed 00:11:43.775 Test: blockdev write read 8 blocks ...passed 00:11:43.775 Test: blockdev write read size > 128k ...passed 00:11:43.775 Test: blockdev write read invalid size ...passed 00:11:43.775 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:43.775 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:43.775 Test: blockdev write read max offset ...passed 00:11:44.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.033 Test: blockdev writev readv 8 blocks ...passed 00:11:44.033 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.033 Test: blockdev writev readv block ...passed 00:11:44.033 Test: blockdev writev readv size > 128k ...passed 00:11:44.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.033 Test: blockdev comparev and writev ...[2024-11-20 12:25:49.658034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.658859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:11:44.033 [2024-11-20 12:25:49.658866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:44.033 passed 00:11:44.033 Test: blockdev nvme passthru rw ...passed 00:11:44.033 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:25:49.742571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.033 [2024-11-20 12:25:49.742587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.742687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.033 [2024-11-20 12:25:49.742696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.742794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.033 [2024-11-20 12:25:49.742803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:44.033 [2024-11-20 12:25:49.742900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:44.033 [2024-11-20 12:25:49.742909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:44.033 passed 00:11:44.033 Test: blockdev nvme admin passthru ...passed 00:11:44.292 Test: blockdev copy ...passed 00:11:44.292 00:11:44.292 Run Summary: Type Total Ran Passed Failed Inactive 00:11:44.292 suites 1 1 n/a 0 0 00:11:44.292 tests 23 23 23 0 0 00:11:44.292 asserts 152 152 152 0 n/a 00:11:44.292 00:11:44.292 Elapsed time = 1.141 seconds 00:11:44.292 12:25:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.293 rmmod nvme_tcp 00:11:44.293 rmmod nvme_fabrics 00:11:44.293 rmmod nvme_keyring 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 77596 ']' 00:11:44.293 12:25:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 77596 00:11:44.293 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 77596 ']' 
00:11:44.293 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 77596 00:11:44.293 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:44.293 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.293 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77596 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77596' 00:11:44.552 killing process with pid 77596 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 77596 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 77596 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.552 12:25:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.552 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.090 00:11:47.090 real 0m10.031s 00:11:47.090 user 0m9.706s 00:11:47.090 sys 0m5.121s 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:47.090 ************************************ 00:11:47.090 END TEST nvmf_bdevio 00:11:47.090 ************************************ 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:47.090 00:11:47.090 real 4m36.066s 00:11:47.090 user 10m27.286s 00:11:47.090 sys 1m40.329s 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:47.090 ************************************ 00:11:47.090 END TEST nvmf_target_core 00:11:47.090 ************************************ 00:11:47.090 12:25:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:47.090 12:25:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.090 12:25:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.090 12:25:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:47.090 
************************************ 00:11:47.090 START TEST nvmf_target_extra 00:11:47.090 ************************************ 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:47.090 * Looking for test storage... 00:11:47.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:47.090 
12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.090 --rc genhtml_branch_coverage=1 00:11:47.090 --rc genhtml_function_coverage=1 00:11:47.090 --rc genhtml_legend=1 00:11:47.090 --rc geninfo_all_blocks=1 00:11:47.090 
--rc geninfo_unexecuted_blocks=1 00:11:47.090 00:11:47.090 ' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.090 --rc genhtml_branch_coverage=1 00:11:47.090 --rc genhtml_function_coverage=1 00:11:47.090 --rc genhtml_legend=1 00:11:47.090 --rc geninfo_all_blocks=1 00:11:47.090 --rc geninfo_unexecuted_blocks=1 00:11:47.090 00:11:47.090 ' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.090 --rc genhtml_branch_coverage=1 00:11:47.090 --rc genhtml_function_coverage=1 00:11:47.090 --rc genhtml_legend=1 00:11:47.090 --rc geninfo_all_blocks=1 00:11:47.090 --rc geninfo_unexecuted_blocks=1 00:11:47.090 00:11:47.090 ' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.090 --rc genhtml_branch_coverage=1 00:11:47.090 --rc genhtml_function_coverage=1 00:11:47.090 --rc genhtml_legend=1 00:11:47.090 --rc geninfo_all_blocks=1 00:11:47.090 --rc geninfo_unexecuted_blocks=1 00:11:47.090 00:11:47.090 ' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.090 12:25:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.091 ************************************ 00:11:47.091 START TEST nvmf_example 00:11:47.091 ************************************ 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:47.091 * Looking for test storage... 00:11:47.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.091 
12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.091 --rc genhtml_branch_coverage=1 00:11:47.091 --rc genhtml_function_coverage=1 00:11:47.091 --rc genhtml_legend=1 00:11:47.091 --rc geninfo_all_blocks=1 00:11:47.091 --rc geninfo_unexecuted_blocks=1 00:11:47.091 00:11:47.091 ' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.091 --rc genhtml_branch_coverage=1 00:11:47.091 --rc genhtml_function_coverage=1 00:11:47.091 --rc genhtml_legend=1 00:11:47.091 --rc geninfo_all_blocks=1 00:11:47.091 --rc geninfo_unexecuted_blocks=1 00:11:47.091 00:11:47.091 ' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.091 --rc genhtml_branch_coverage=1 00:11:47.091 --rc genhtml_function_coverage=1 00:11:47.091 --rc genhtml_legend=1 00:11:47.091 --rc geninfo_all_blocks=1 00:11:47.091 --rc geninfo_unexecuted_blocks=1 00:11:47.091 00:11:47.091 ' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.091 --rc 
genhtml_branch_coverage=1 00:11:47.091 --rc genhtml_function_coverage=1 00:11:47.091 --rc genhtml_legend=1 00:11:47.091 --rc geninfo_all_blocks=1 00:11:47.091 --rc geninfo_unexecuted_blocks=1 00:11:47.091 00:11:47.091 ' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.091 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:47.351 12:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.351 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.352 
12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.352 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.922 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.923 12:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:53.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:53.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:53.923 Found net devices under 0000:86:00.0: cvl_0_0 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.923 12:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:53.923 Found net devices under 0000:86:00.1: cvl_0_1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.923 
12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:11:53.923 00:11:53.923 --- 10.0.0.2 ping statistics --- 00:11:53.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.923 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:11:53.923 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:53.924 00:11:53.924 --- 10.0.0.1 ping statistics --- 00:11:53.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.924 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.924 12:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=81443 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 81443 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 81443 ']' 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:53.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.924 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:54.182 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:54.182 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:06.368 Initializing NVMe Controllers 00:12:06.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:06.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:06.368 Initialization complete. Launching workers. 00:12:06.368 ======================================================== 00:12:06.368 Latency(us) 00:12:06.368 Device Information : IOPS MiB/s Average min max 00:12:06.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18287.91 71.44 3499.16 537.85 16236.06 00:12:06.368 ======================================================== 00:12:06.368 Total : 18287.91 71.44 3499.16 537.85 16236.06 00:12:06.368 00:12:06.368 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:06.368 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.369 rmmod nvme_tcp 00:12:06.369 rmmod nvme_fabrics 00:12:06.369 rmmod nvme_keyring 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 81443 ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 81443 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 81443 ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 81443 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81443 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81443' 00:12:06.369 killing process with pid 81443 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 81443 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 81443 00:12:06.369 nvmf threads initialize successfully 00:12:06.369 bdev subsystem init successfully 00:12:06.369 created a nvmf target service 00:12:06.369 create targets's poll groups done 00:12:06.369 all subsystems of target started 00:12:06.369 nvmf target is running 00:12:06.369 all subsystems of target stopped 00:12:06.369 destroy targets's poll groups done 00:12:06.369 destroyed the nvmf target service 00:12:06.369 bdev subsystem finish successfully 
00:12:06.369 nvmf threads destroy successfully 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.369 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.938 00:12:06.938 real 0m19.842s 00:12:06.938 user 0m45.876s 00:12:06.938 sys 0m6.186s 00:12:06.938 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.938 ************************************ 00:12:06.938 END TEST nvmf_example 00:12:06.938 ************************************ 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.938 ************************************ 00:12:06.938 START TEST nvmf_filesystem 00:12:06.938 ************************************ 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.938 * Looking for test storage... 
00:12:06.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.938 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.200 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:07.201 
12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.201 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:07.201 --rc genhtml_branch_coverage=1 00:12:07.201 --rc genhtml_function_coverage=1 00:12:07.201 --rc genhtml_legend=1 00:12:07.201 --rc geninfo_all_blocks=1 00:12:07.201 --rc geninfo_unexecuted_blocks=1 00:12:07.201 00:12:07.201 ' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.201 --rc genhtml_branch_coverage=1 00:12:07.201 --rc genhtml_function_coverage=1 00:12:07.201 --rc genhtml_legend=1 00:12:07.201 --rc geninfo_all_blocks=1 00:12:07.201 --rc geninfo_unexecuted_blocks=1 00:12:07.201 00:12:07.201 ' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.201 --rc genhtml_branch_coverage=1 00:12:07.201 --rc genhtml_function_coverage=1 00:12:07.201 --rc genhtml_legend=1 00:12:07.201 --rc geninfo_all_blocks=1 00:12:07.201 --rc geninfo_unexecuted_blocks=1 00:12:07.201 00:12:07.201 ' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.201 --rc genhtml_branch_coverage=1 00:12:07.201 --rc genhtml_function_coverage=1 00:12:07.201 --rc genhtml_legend=1 00:12:07.201 --rc geninfo_all_blocks=1 00:12:07.201 --rc geninfo_unexecuted_blocks=1 00:12:07.201 00:12:07.201 ' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:07.201 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:07.201 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:07.201 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:07.201 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:07.202 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:07.202 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:07.202 
12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:07.202 #define SPDK_CONFIG_H 00:12:07.202 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:07.202 #define SPDK_CONFIG_APPS 1 00:12:07.202 #define SPDK_CONFIG_ARCH native 00:12:07.202 #undef SPDK_CONFIG_ASAN 00:12:07.202 #undef SPDK_CONFIG_AVAHI 00:12:07.202 #undef SPDK_CONFIG_CET 00:12:07.202 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:07.202 #define SPDK_CONFIG_COVERAGE 1 00:12:07.202 #define SPDK_CONFIG_CROSS_PREFIX 00:12:07.202 #undef SPDK_CONFIG_CRYPTO 00:12:07.202 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:07.202 #undef SPDK_CONFIG_CUSTOMOCF 00:12:07.202 #undef SPDK_CONFIG_DAOS 00:12:07.202 #define SPDK_CONFIG_DAOS_DIR 00:12:07.202 #define SPDK_CONFIG_DEBUG 1 00:12:07.202 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:07.202 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:07.202 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:07.202 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:07.202 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:07.202 #undef SPDK_CONFIG_DPDK_UADK 00:12:07.202 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:07.202 #define SPDK_CONFIG_EXAMPLES 1 00:12:07.202 #undef SPDK_CONFIG_FC 00:12:07.202 #define SPDK_CONFIG_FC_PATH 00:12:07.202 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:07.202 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:07.202 #define SPDK_CONFIG_FSDEV 1 00:12:07.202 #undef SPDK_CONFIG_FUSE 00:12:07.202 #undef SPDK_CONFIG_FUZZER 00:12:07.202 #define SPDK_CONFIG_FUZZER_LIB 00:12:07.202 #undef SPDK_CONFIG_GOLANG 00:12:07.202 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:07.202 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:07.202 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:07.202 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:07.202 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:07.202 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:07.202 #undef SPDK_CONFIG_HAVE_LZ4 00:12:07.202 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:07.202 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:07.202 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:07.202 #define SPDK_CONFIG_IDXD 1 00:12:07.202 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:07.202 #undef SPDK_CONFIG_IPSEC_MB 00:12:07.202 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:07.202 #define SPDK_CONFIG_ISAL 1 00:12:07.202 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:07.202 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:07.202 #define SPDK_CONFIG_LIBDIR 00:12:07.202 #undef SPDK_CONFIG_LTO 00:12:07.202 #define SPDK_CONFIG_MAX_LCORES 128 00:12:07.202 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:07.202 #define SPDK_CONFIG_NVME_CUSE 1 00:12:07.202 #undef SPDK_CONFIG_OCF 00:12:07.202 #define SPDK_CONFIG_OCF_PATH 00:12:07.202 #define SPDK_CONFIG_OPENSSL_PATH 00:12:07.202 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:07.202 #define SPDK_CONFIG_PGO_DIR 00:12:07.202 #undef SPDK_CONFIG_PGO_USE 00:12:07.202 #define SPDK_CONFIG_PREFIX /usr/local 00:12:07.202 #undef SPDK_CONFIG_RAID5F 00:12:07.202 #undef SPDK_CONFIG_RBD 00:12:07.202 #define SPDK_CONFIG_RDMA 1 00:12:07.202 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:07.202 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:07.202 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:07.202 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:07.202 #define SPDK_CONFIG_SHARED 1 00:12:07.202 #undef SPDK_CONFIG_SMA 00:12:07.202 #define SPDK_CONFIG_TESTS 1 00:12:07.202 #undef SPDK_CONFIG_TSAN 00:12:07.202 #define SPDK_CONFIG_UBLK 1 00:12:07.202 #define SPDK_CONFIG_UBSAN 1 00:12:07.202 #undef SPDK_CONFIG_UNIT_TESTS 00:12:07.202 #undef SPDK_CONFIG_URING 00:12:07.202 #define SPDK_CONFIG_URING_PATH 00:12:07.202 #undef SPDK_CONFIG_URING_ZNS 00:12:07.202 #undef SPDK_CONFIG_USDT 00:12:07.202 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:07.202 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:07.202 #define SPDK_CONFIG_VFIO_USER 1 00:12:07.202 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:07.202 #define SPDK_CONFIG_VHOST 1 00:12:07.202 #define SPDK_CONFIG_VIRTIO 1 00:12:07.202 #undef SPDK_CONFIG_VTUNE 00:12:07.202 #define SPDK_CONFIG_VTUNE_DIR 00:12:07.202 #define SPDK_CONFIG_WERROR 1 00:12:07.202 #define SPDK_CONFIG_WPDK_DIR 00:12:07.202 #undef SPDK_CONFIG_XNVME 00:12:07.202 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.202 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:07.203 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:07.203 
12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:07.203 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:07.203 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:07.204 
12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:07.204 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:07.204 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 83845 ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 83845 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 
-- # local storage_fallback storage_candidates 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.7brQwl 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7brQwl/tests/target /tmp/spdk.7brQwl 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:07.205 
12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189122363392 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6841610240 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.205 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:07.205 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981288448 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=700416 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:07.206 * Looking for test storage... 
00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189122363392 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9056202752 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.206 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:07.206 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.206 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.467 --rc genhtml_branch_coverage=1 00:12:07.467 --rc genhtml_function_coverage=1 00:12:07.467 --rc genhtml_legend=1 00:12:07.467 --rc geninfo_all_blocks=1 00:12:07.467 --rc geninfo_unexecuted_blocks=1 00:12:07.467 00:12:07.467 ' 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.467 --rc genhtml_branch_coverage=1 00:12:07.467 --rc genhtml_function_coverage=1 00:12:07.467 --rc genhtml_legend=1 00:12:07.467 --rc geninfo_all_blocks=1 00:12:07.467 --rc geninfo_unexecuted_blocks=1 00:12:07.467 00:12:07.467 ' 00:12:07.467 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.467 --rc genhtml_branch_coverage=1 00:12:07.467 --rc genhtml_function_coverage=1 00:12:07.467 --rc genhtml_legend=1 00:12:07.467 --rc geninfo_all_blocks=1 00:12:07.468 --rc geninfo_unexecuted_blocks=1 00:12:07.468 00:12:07.468 ' 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.468 --rc genhtml_branch_coverage=1 00:12:07.468 --rc genhtml_function_coverage=1 00:12:07.468 --rc genhtml_legend=1 00:12:07.468 --rc geninfo_all_blocks=1 00:12:07.468 --rc geninfo_unexecuted_blocks=1 00:12:07.468 00:12:07.468 ' 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.468 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.468 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.468 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.039 12:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:14.039 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:14.039 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.039 12:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:14.039 Found net devices under 0000:86:00.0: cvl_0_0 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.039 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:14.040 Found net devices under 0000:86:00.1: cvl_0_1 00:12:14.040 12:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.040 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:12:14.040 00:12:14.040 --- 10.0.0.2 ping statistics --- 00:12:14.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.040 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:12:14.040 00:12:14.040 --- 10.0.0.1 ping statistics --- 00:12:14.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.040 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:14.040 12:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 ************************************ 00:12:14.040 START TEST nvmf_filesystem_no_in_capsule 00:12:14.040 ************************************ 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=87093 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 87093 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 87093 ']' 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 [2024-11-20 12:26:19.163083] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:14.040 [2024-11-20 12:26:19.163129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.040 [2024-11-20 12:26:19.241933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.040 [2024-11-20 12:26:19.284604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.040 [2024-11-20 12:26:19.284640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:14.040 [2024-11-20 12:26:19.284647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.040 [2024-11-20 12:26:19.284653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.040 [2024-11-20 12:26:19.284658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.040 [2024-11-20 12:26:19.286046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.040 [2024-11-20 12:26:19.286157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.040 [2024-11-20 12:26:19.286261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.040 [2024-11-20 12:26:19.286262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 [2024-11-20 12:26:19.426898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.040 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.040 Malloc1 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.041 [2024-11-20 12:26:19.581597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:14.041 12:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:14.041 { 00:12:14.041 "name": "Malloc1", 00:12:14.041 "aliases": [ 00:12:14.041 "87c0f92f-ae39-445a-9028-33d1ecebaab1" 00:12:14.041 ], 00:12:14.041 "product_name": "Malloc disk", 00:12:14.041 "block_size": 512, 00:12:14.041 "num_blocks": 1048576, 00:12:14.041 "uuid": "87c0f92f-ae39-445a-9028-33d1ecebaab1", 00:12:14.041 "assigned_rate_limits": { 00:12:14.041 "rw_ios_per_sec": 0, 00:12:14.041 "rw_mbytes_per_sec": 0, 00:12:14.041 "r_mbytes_per_sec": 0, 00:12:14.041 "w_mbytes_per_sec": 0 00:12:14.041 }, 00:12:14.041 "claimed": true, 00:12:14.041 "claim_type": "exclusive_write", 00:12:14.041 "zoned": false, 00:12:14.041 "supported_io_types": { 00:12:14.041 "read": true, 00:12:14.041 "write": true, 00:12:14.041 "unmap": true, 00:12:14.041 "flush": true, 00:12:14.041 "reset": true, 00:12:14.041 "nvme_admin": false, 00:12:14.041 "nvme_io": false, 00:12:14.041 "nvme_io_md": false, 00:12:14.041 "write_zeroes": true, 00:12:14.041 "zcopy": true, 00:12:14.041 "get_zone_info": false, 00:12:14.041 "zone_management": false, 00:12:14.041 "zone_append": false, 00:12:14.041 "compare": false, 00:12:14.041 "compare_and_write": 
false, 00:12:14.041 "abort": true, 00:12:14.041 "seek_hole": false, 00:12:14.041 "seek_data": false, 00:12:14.041 "copy": true, 00:12:14.041 "nvme_iov_md": false 00:12:14.041 }, 00:12:14.041 "memory_domains": [ 00:12:14.041 { 00:12:14.041 "dma_device_id": "system", 00:12:14.041 "dma_device_type": 1 00:12:14.041 }, 00:12:14.041 { 00:12:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.041 "dma_device_type": 2 00:12:14.041 } 00:12:14.041 ], 00:12:14.041 "driver_specific": {} 00:12:14.041 } 00:12:14.041 ]' 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:14.041 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.412 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:15.412 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.412 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.412 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.412 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.344 12:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.344 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.600 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.856 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:19.221 12:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 ************************************ 00:12:19.221 START TEST filesystem_ext4 00:12:19.221 ************************************ 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:19.221 12:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:19.221 mke2fs 1.47.0 (5-Feb-2023) 00:12:19.221 Discarding device blocks: 0/522240 done 00:12:19.221 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:19.221 Filesystem UUID: 13546e36-5b94-4597-a876-f617fb9a727d 00:12:19.221 Superblock backups stored on blocks: 00:12:19.221 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:19.221 00:12:19.221 Allocating group tables: 0/64 done 00:12:19.221 Writing inode tables: 0/64 done 00:12:19.221 Creating journal (8192 blocks): done 00:12:19.221 Writing superblocks and filesystem accounting information: 0/64 done 00:12:19.221 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:19.221 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.468 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.468 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:24.468 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.468 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:24.468 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:24.468 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 87093 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.725 00:12:24.725 real 0m5.637s 00:12:24.725 user 0m0.026s 00:12:24.725 sys 0m0.071s 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.725 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:24.725 ************************************ 00:12:24.725 END TEST filesystem_ext4 00:12:24.726 ************************************ 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:24.726 
12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 ************************************ 00:12:24.726 START TEST filesystem_btrfs 00:12:24.726 ************************************ 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:24.726 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:24.726 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:24.983 btrfs-progs v6.8.1 00:12:24.983 See https://btrfs.readthedocs.io for more information. 00:12:24.983 00:12:24.983 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:24.983 NOTE: several default settings have changed in version 5.15, please make sure 00:12:24.983 this does not affect your deployments: 00:12:24.983 - DUP for metadata (-m dup) 00:12:24.983 - enabled no-holes (-O no-holes) 00:12:24.983 - enabled free-space-tree (-R free-space-tree) 00:12:24.983 00:12:24.983 Label: (null) 00:12:24.983 UUID: 001cfc5b-c3b2-44dc-8390-7e40ec38826d 00:12:24.983 Node size: 16384 00:12:24.983 Sector size: 4096 (CPU page size: 4096) 00:12:24.983 Filesystem size: 510.00MiB 00:12:24.983 Block group profiles: 00:12:24.983 Data: single 8.00MiB 00:12:24.983 Metadata: DUP 32.00MiB 00:12:24.983 System: DUP 8.00MiB 00:12:24.983 SSD detected: yes 00:12:24.983 Zoned device: no 00:12:24.983 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:24.983 Checksum: crc32c 00:12:24.983 Number of devices: 1 00:12:24.983 Devices: 00:12:24.983 ID SIZE PATH 00:12:24.983 1 510.00MiB /dev/nvme0n1p1 00:12:24.983 00:12:24.983 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:24.983 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.913 12:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 87093 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.913 00:12:25.913 real 0m1.333s 00:12:25.913 user 0m0.030s 00:12:25.913 sys 0m0.112s 00:12:25.913 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.913 
12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.913 ************************************ 00:12:25.913 END TEST filesystem_btrfs 00:12:25.913 ************************************ 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.170 ************************************ 00:12:26.170 START TEST filesystem_xfs 00:12:26.170 ************************************ 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.170 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:26.170 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:26.170 = sectsz=512 attr=2, projid32bit=1 00:12:26.170 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:26.170 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:26.171 data = bsize=4096 blocks=130560, imaxpct=25 00:12:26.171 = sunit=0 swidth=0 blks 00:12:26.171 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:26.171 log =internal log bsize=4096 blocks=16384, version=2 00:12:26.171 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:26.171 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:27.101 Discarding blocks...Done. 
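The three filesystem runs traced above (ext4, btrfs, xfs) all funnel through the same `make_filesystem` helper, whose only per-fstype branching visible in the trace is the force flag (`-F` for ext4, `-f` for btrfs/xfs). A minimal sketch of that selection logic, paraphrased from the trace rather than copied from `common/autotest_common.sh` (the echoed command is illustrative; the real helper executes mkfs and retries):

```shell
#!/bin/sh
# Hypothetical simplification of the make_filesystem helper seen in the
# xtrace output; it only prints the mkfs command it would run.
make_filesystem() {
    fstype=$1
    dev_name=$2
    # ext4's mkfs refuses to overwrite an existing fs without -F;
    # mkfs.btrfs and mkfs.xfs use -f for the same purpose.
    if [ "$fstype" = "ext4" ]; then
        force=-F
    else
        force=-f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem ext4  /dev/nvme0n1p1
make_filesystem btrfs /dev/nvme0n1p1
make_filesystem xfs   /dev/nvme0n1p1
```

In the actual run each mkfs is followed by the same mount/touch/sync/rm/umount cycle against /mnt/device, which is what the `filesystem_ext4`, `filesystem_btrfs`, and `filesystem_xfs` TEST markers above are exercising.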
00:12:27.101 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.101 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 87093 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:29.622 12:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.622 00:12:29.622 real 0m3.432s 00:12:29.622 user 0m0.033s 00:12:29.622 sys 0m0.067s 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:29.622 ************************************ 00:12:29.622 END TEST filesystem_xfs 00:12:29.622 ************************************ 00:12:29.622 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 87093 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 87093 ']' 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 87093 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.879 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87093 00:12:30.136 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.136 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.136 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87093' 00:12:30.136 killing process with pid 87093 00:12:30.136 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 87093 00:12:30.136 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 87093 00:12:30.395 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:30.395 00:12:30.395 real 0m16.885s 00:12:30.395 user 1m6.426s 00:12:30.395 sys 0m1.379s 00:12:30.395 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.395 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 ************************************ 00:12:30.395 END TEST nvmf_filesystem_no_in_capsule 00:12:30.395 ************************************ 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.395 12:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 ************************************ 00:12:30.395 START TEST nvmf_filesystem_in_capsule 00:12:30.395 ************************************ 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=90093 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 90093 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 90093 ']' 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.395 12:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.395 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.395 [2024-11-20 12:26:36.120516] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:30.395 [2024-11-20 12:26:36.120555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.652 [2024-11-20 12:26:36.199466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.652 [2024-11-20 12:26:36.241360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.652 [2024-11-20 12:26:36.241397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.652 [2024-11-20 12:26:36.241403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.652 [2024-11-20 12:26:36.241409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.652 [2024-11-20 12:26:36.241414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:30.652 [2024-11-20 12:26:36.242951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.652 [2024-11-20 12:26:36.243061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.652 [2024-11-20 12:26:36.243151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.652 [2024-11-20 12:26:36.243150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.215 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.215 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:31.215 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.215 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.216 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 [2024-11-20 12:26:37.009971] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 [2024-11-20 12:26:37.150002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.473 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.473 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:31.473 { 00:12:31.473 "name": "Malloc1", 00:12:31.473 "aliases": [ 00:12:31.473 "60944044-489d-4e2b-9511-9526b2bbdb71" 00:12:31.473 ], 00:12:31.473 "product_name": "Malloc disk", 00:12:31.473 "block_size": 512, 00:12:31.473 "num_blocks": 1048576, 00:12:31.473 "uuid": "60944044-489d-4e2b-9511-9526b2bbdb71", 00:12:31.473 "assigned_rate_limits": { 00:12:31.473 "rw_ios_per_sec": 0, 00:12:31.474 "rw_mbytes_per_sec": 0, 00:12:31.474 "r_mbytes_per_sec": 0, 00:12:31.474 "w_mbytes_per_sec": 0 00:12:31.474 }, 00:12:31.474 "claimed": true, 00:12:31.474 "claim_type": "exclusive_write", 00:12:31.474 "zoned": false, 00:12:31.474 "supported_io_types": { 00:12:31.474 "read": true, 00:12:31.474 "write": true, 00:12:31.474 "unmap": true, 00:12:31.474 "flush": true, 00:12:31.474 "reset": true, 00:12:31.474 "nvme_admin": false, 00:12:31.474 "nvme_io": false, 00:12:31.474 "nvme_io_md": false, 00:12:31.474 "write_zeroes": true, 00:12:31.474 "zcopy": true, 00:12:31.474 "get_zone_info": false, 00:12:31.474 "zone_management": false, 00:12:31.474 "zone_append": false, 00:12:31.474 "compare": false, 00:12:31.474 "compare_and_write": false, 00:12:31.474 "abort": true, 00:12:31.474 "seek_hole": false, 00:12:31.474 "seek_data": false, 00:12:31.474 "copy": true, 00:12:31.474 "nvme_iov_md": false 00:12:31.474 }, 00:12:31.474 "memory_domains": [ 00:12:31.474 { 00:12:31.474 "dma_device_id": "system", 00:12:31.474 "dma_device_type": 1 00:12:31.474 }, 00:12:31.474 { 00:12:31.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.474 "dma_device_type": 2 00:12:31.474 } 00:12:31.474 ], 00:12:31.474 
"driver_specific": {} 00:12:31.474 } 00:12:31.474 ]' 00:12:31.474 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:31.474 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:31.474 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:31.731 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:31.731 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:31.731 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:31.731 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:31.731 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.660 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.660 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.660 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.660 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:32.660 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:35.207 12:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:35.207 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:35.771 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.702 ************************************ 00:12:36.702 START TEST filesystem_in_capsule_ext4 00:12:36.702 ************************************ 00:12:36.702 12:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:36.702 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:36.702 mke2fs 1.47.0 (5-Feb-2023) 00:12:36.959 Discarding device blocks: 
0/522240 done 00:12:36.959 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:36.959 Filesystem UUID: 69baa5be-781a-4a06-87ea-b301d79e0172 00:12:36.959 Superblock backups stored on blocks: 00:12:36.959 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:36.959 00:12:36.959 Allocating group tables: 0/64 done 00:12:36.959 Writing inode tables: 0/64 done 00:12:36.959 Creating journal (8192 blocks): done 00:12:36.959 Writing superblocks and filesystem accounting information: 0/64 done 00:12:36.959 00:12:36.959 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:36.959 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 90093 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:43.574 00:12:43.574 real 0m6.288s 00:12:43.574 user 0m0.023s 00:12:43.574 sys 0m0.075s 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:43.574 ************************************ 00:12:43.574 END TEST filesystem_in_capsule_ext4 00:12:43.574 ************************************ 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.574 ************************************ 00:12:43.574 START TEST 
filesystem_in_capsule_btrfs 00:12:43.574 ************************************ 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:43.574 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:43.575 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:43.575 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:43.575 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:43.575 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:43.575 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 
-- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:43.575 btrfs-progs v6.8.1 00:12:43.575 See https://btrfs.readthedocs.io for more information. 00:12:43.575 00:12:43.575 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:43.575 NOTE: several default settings have changed in version 5.15, please make sure 00:12:43.575 this does not affect your deployments: 00:12:43.575 - DUP for metadata (-m dup) 00:12:43.575 - enabled no-holes (-O no-holes) 00:12:43.575 - enabled free-space-tree (-R free-space-tree) 00:12:43.575 00:12:43.575 Label: (null) 00:12:43.575 UUID: 4bf012b2-2fa3-40b2-a8ae-00c8b93e41a2 00:12:43.575 Node size: 16384 00:12:43.575 Sector size: 4096 (CPU page size: 4096) 00:12:43.575 Filesystem size: 510.00MiB 00:12:43.575 Block group profiles: 00:12:43.575 Data: single 8.00MiB 00:12:43.575 Metadata: DUP 32.00MiB 00:12:43.575 System: DUP 8.00MiB 00:12:43.575 SSD detected: yes 00:12:43.575 Zoned device: no 00:12:43.575 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:43.575 Checksum: crc32c 00:12:43.575 Number of devices: 1 00:12:43.575 Devices: 00:12:43.575 ID SIZE PATH 00:12:43.575 1 510.00MiB /dev/nvme0n1p1 00:12:43.575 00:12:43.575 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:43.575 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 90093 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:43.906 00:12:43.906 real 0m0.870s 00:12:43.906 user 0m0.026s 00:12:43.906 sys 0m0.116s 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.906 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 ************************************ 00:12:43.906 END TEST filesystem_in_capsule_btrfs 00:12:43.906 ************************************ 00:12:44.193 12:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.193 ************************************ 00:12:44.193 START TEST filesystem_in_capsule_xfs 00:12:44.193 ************************************ 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:44.193 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:44.193 
12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:44.194 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:44.194 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:44.194 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:44.194 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:44.194 = sectsz=512 attr=2, projid32bit=1 00:12:44.194 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:44.194 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:44.194 data = bsize=4096 blocks=130560, imaxpct=25 00:12:44.194 = sunit=0 swidth=0 blks 00:12:44.194 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:44.194 log =internal log bsize=4096 blocks=16384, version=2 00:12:44.194 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:44.194 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:45.138 Discarding blocks...Done. 
00:12:45.138 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:45.138 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 90093 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:47.663 00:12:47.663 real 0m3.223s 00:12:47.663 user 0m0.024s 00:12:47.663 sys 0m0.077s 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:47.663 ************************************ 00:12:47.663 END TEST filesystem_in_capsule_xfs 00:12:47.663 ************************************ 00:12:47.663 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.663 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 90093 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 90093 ']' 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 90093 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.663 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90093 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90093' 00:12:47.663 killing process with pid 90093 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 90093 00:12:47.663 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 90093 00:12:48.231 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:48.231 00:12:48.231 real 0m17.655s 00:12:48.231 user 1m9.629s 00:12:48.231 sys 0m1.461s 00:12:48.231 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.231 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:48.231 ************************************ 00:12:48.232 END TEST nvmf_filesystem_in_capsule 00:12:48.232 ************************************ 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.232 rmmod nvme_tcp 00:12:48.232 rmmod nvme_fabrics 00:12:48.232 rmmod nvme_keyring 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.232 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.137 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.137 00:12:50.137 real 0m43.318s 00:12:50.137 user 2m18.147s 00:12:50.137 sys 0m7.547s 00:12:50.137 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.137 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:50.137 ************************************ 00:12:50.137 END TEST nvmf_filesystem 00:12:50.137 ************************************ 00:12:50.396 12:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:50.396 12:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.396 12:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.396 12:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 ************************************ 00:12:50.396 START TEST nvmf_target_discovery 00:12:50.396 ************************************ 00:12:50.396 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:50.396 * Looking for test storage... 
00:12:50.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:50.396 
12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.396 --rc genhtml_branch_coverage=1 00:12:50.396 --rc genhtml_function_coverage=1 00:12:50.396 --rc genhtml_legend=1 00:12:50.396 --rc geninfo_all_blocks=1 00:12:50.396 --rc geninfo_unexecuted_blocks=1 00:12:50.396 00:12:50.396 ' 00:12:50.396 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.396 --rc genhtml_branch_coverage=1 00:12:50.396 --rc genhtml_function_coverage=1 00:12:50.396 --rc genhtml_legend=1 00:12:50.396 --rc geninfo_all_blocks=1 00:12:50.396 --rc geninfo_unexecuted_blocks=1 00:12:50.396 00:12:50.397 ' 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.397 --rc genhtml_branch_coverage=1 00:12:50.397 --rc genhtml_function_coverage=1 00:12:50.397 --rc genhtml_legend=1 00:12:50.397 --rc geninfo_all_blocks=1 00:12:50.397 --rc geninfo_unexecuted_blocks=1 00:12:50.397 00:12:50.397 ' 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.397 --rc genhtml_branch_coverage=1 00:12:50.397 --rc genhtml_function_coverage=1 00:12:50.397 --rc genhtml_legend=1 00:12:50.397 --rc geninfo_all_blocks=1 00:12:50.397 --rc geninfo_unexecuted_blocks=1 00:12:50.397 00:12:50.397 ' 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.397 12:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.397 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.656 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.229 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.229 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.229 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:57.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:57.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.230 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:57.230 Found net devices under 0000:86:00.0: cvl_0_0 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:57.230 12:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:57.230 Found net devices under 0000:86:00.1: cvl_0_1 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.230 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:12:57.230 00:12:57.230 --- 10.0.0.2 ping statistics --- 00:12:57.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.230 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:12:57.230 00:12:57.230 --- 10.0.0.1 ping statistics --- 00:12:57.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.230 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=96750 00:12:57.230 12:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 96750 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 96750 ']' 00:12:57.230 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 [2024-11-20 12:27:02.239667] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:12:57.231 [2024-11-20 12:27:02.239709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.231 [2024-11-20 12:27:02.319358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.231 [2024-11-20 12:27:02.361680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:57.231 [2024-11-20 12:27:02.361715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.231 [2024-11-20 12:27:02.361722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.231 [2024-11-20 12:27:02.361727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.231 [2024-11-20 12:27:02.361732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.231 [2024-11-20 12:27:02.363173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.231 [2024-11-20 12:27:02.363302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.231 [2024-11-20 12:27:02.363334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.231 [2024-11-20 12:27:02.363334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 [2024-11-20 12:27:02.500308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 Null1 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 
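Earlier in this log, `nvmf_tcp_init` (nvmf/common.sh) moves one port of the E810 pair into a network namespace and punches an iptables hole for the NVMe/TCP listener before ping-verifying both directions. A dry-run sketch of that sequence, with interface names, IPs, and the namespace name taken from the log output above (real execution needs root, so this only prints the commands):

```shell
# Dry-run of the nvmf_tcp_init setup visible earlier in this log.
# cvl_0_0 becomes the target side (inside the namespace, 10.0.0.2),
# cvl_0_1 stays in the root namespace as the initiator (10.0.0.1).
setup_cmds="ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2"
printf '%s\n' "$setup_cmds"
```

All subsequent `rpc_cmd`/app invocations in the log are prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`, which is why the target listens on 10.0.0.2 inside the namespace.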
12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 [2024-11-20 12:27:02.545691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 Null2 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 
12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 Null3 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 Null4 00:12:57.231 
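The repeating Null1..Null4 pattern in this log comes from a `seq 1 4` loop in target/discovery.sh: each iteration creates a null bdev, a subsystem, attaches the bdev as a namespace, and adds a TCP listener. A dry-run sketch of the loop (the real calls go through SPDK's JSON-RPC via the `rpc_cmd` wrapper; command names and arguments are taken from the log, the echo-only dry-run is an illustration):

```shell
# Dry-run of the discovery.sh setup loop seen in this log:
# four null bdevs -> four subsystems -> namespaces -> TCP listeners.
cmds=""
for i in $(seq 1 4); do
  cmds="$cmds
bdev_null_create Null$i 102400 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
done
printf '%s\n' "$cmds"
```

The discovery listener added afterwards (`nvmf_subsystem_add_listener discovery ...`) plus the 4430 referral is what makes the later `nvme discover` report six log entries: one current discovery subsystem, four NVMe subsystems, and one referral.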
12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.231 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:57.232 00:12:57.232 Discovery Log Number of Records 6, Generation counter 6 00:12:57.232 =====Discovery Log Entry 0====== 00:12:57.232 trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: current discovery subsystem 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4420 00:12:57.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: explicit discovery connections, duplicate discovery information 00:12:57.232 sectype: none 00:12:57.232 =====Discovery Log Entry 1====== 00:12:57.232 trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: nvme subsystem 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4420 00:12:57.232 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: none 00:12:57.232 sectype: none 00:12:57.232 =====Discovery Log Entry 2====== 00:12:57.232 
trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: nvme subsystem 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4420 00:12:57.232 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: none 00:12:57.232 sectype: none 00:12:57.232 =====Discovery Log Entry 3====== 00:12:57.232 trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: nvme subsystem 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4420 00:12:57.232 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: none 00:12:57.232 sectype: none 00:12:57.232 =====Discovery Log Entry 4====== 00:12:57.232 trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: nvme subsystem 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4420 00:12:57.232 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: none 00:12:57.232 sectype: none 00:12:57.232 =====Discovery Log Entry 5====== 00:12:57.232 trtype: tcp 00:12:57.232 adrfam: ipv4 00:12:57.232 subtype: discovery subsystem referral 00:12:57.232 treq: not required 00:12:57.232 portid: 0 00:12:57.232 trsvcid: 4430 00:12:57.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:57.232 traddr: 10.0.0.2 00:12:57.232 eflags: none 00:12:57.232 sectype: none 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:57.232 Perform nvmf subsystem discovery via RPC 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 [ 00:12:57.232 { 00:12:57.232 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:57.232 "subtype": "Discovery", 00:12:57.232 "listen_addresses": [ 00:12:57.232 { 00:12:57.232 "trtype": "TCP", 00:12:57.232 "adrfam": "IPv4", 00:12:57.232 "traddr": "10.0.0.2", 00:12:57.232 "trsvcid": "4420" 00:12:57.232 } 00:12:57.232 ], 00:12:57.232 "allow_any_host": true, 00:12:57.232 "hosts": [] 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.232 "subtype": "NVMe", 00:12:57.232 "listen_addresses": [ 00:12:57.232 { 00:12:57.232 "trtype": "TCP", 00:12:57.232 "adrfam": "IPv4", 00:12:57.232 "traddr": "10.0.0.2", 00:12:57.232 "trsvcid": "4420" 00:12:57.232 } 00:12:57.232 ], 00:12:57.232 "allow_any_host": true, 00:12:57.232 "hosts": [], 00:12:57.232 "serial_number": "SPDK00000000000001", 00:12:57.232 "model_number": "SPDK bdev Controller", 00:12:57.232 "max_namespaces": 32, 00:12:57.232 "min_cntlid": 1, 00:12:57.232 "max_cntlid": 65519, 00:12:57.232 "namespaces": [ 00:12:57.232 { 00:12:57.232 "nsid": 1, 00:12:57.232 "bdev_name": "Null1", 00:12:57.232 "name": "Null1", 00:12:57.232 "nguid": "5B8BF9BC1B7E4C9A950EB70412A87D98", 00:12:57.232 "uuid": "5b8bf9bc-1b7e-4c9a-950e-b70412a87d98" 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:57.232 "subtype": "NVMe", 00:12:57.232 "listen_addresses": [ 00:12:57.232 { 00:12:57.232 "trtype": "TCP", 00:12:57.232 "adrfam": "IPv4", 00:12:57.232 "traddr": "10.0.0.2", 00:12:57.232 "trsvcid": "4420" 00:12:57.232 } 00:12:57.232 ], 00:12:57.232 "allow_any_host": true, 00:12:57.232 "hosts": [], 00:12:57.232 "serial_number": "SPDK00000000000002", 00:12:57.232 "model_number": "SPDK bdev Controller", 00:12:57.232 "max_namespaces": 32, 00:12:57.232 "min_cntlid": 1, 00:12:57.232 "max_cntlid": 65519, 00:12:57.232 "namespaces": [ 00:12:57.232 { 00:12:57.232 "nsid": 1, 00:12:57.232 "bdev_name": "Null2", 00:12:57.232 "name": "Null2", 00:12:57.232 "nguid": "1EEC4C406D2C4F4ABFDE93AA57BC0561", 
00:12:57.232 "uuid": "1eec4c40-6d2c-4f4a-bfde-93aa57bc0561" 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:57.232 "subtype": "NVMe", 00:12:57.232 "listen_addresses": [ 00:12:57.232 { 00:12:57.232 "trtype": "TCP", 00:12:57.232 "adrfam": "IPv4", 00:12:57.232 "traddr": "10.0.0.2", 00:12:57.232 "trsvcid": "4420" 00:12:57.232 } 00:12:57.232 ], 00:12:57.232 "allow_any_host": true, 00:12:57.232 "hosts": [], 00:12:57.232 "serial_number": "SPDK00000000000003", 00:12:57.232 "model_number": "SPDK bdev Controller", 00:12:57.232 "max_namespaces": 32, 00:12:57.232 "min_cntlid": 1, 00:12:57.232 "max_cntlid": 65519, 00:12:57.232 "namespaces": [ 00:12:57.232 { 00:12:57.232 "nsid": 1, 00:12:57.232 "bdev_name": "Null3", 00:12:57.232 "name": "Null3", 00:12:57.232 "nguid": "4F6B2DC89DBF4D1F8EB7D6082545FF0C", 00:12:57.232 "uuid": "4f6b2dc8-9dbf-4d1f-8eb7-d6082545ff0c" 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:57.232 "subtype": "NVMe", 00:12:57.232 "listen_addresses": [ 00:12:57.232 { 00:12:57.232 "trtype": "TCP", 00:12:57.232 "adrfam": "IPv4", 00:12:57.232 "traddr": "10.0.0.2", 00:12:57.232 "trsvcid": "4420" 00:12:57.232 } 00:12:57.232 ], 00:12:57.232 "allow_any_host": true, 00:12:57.232 "hosts": [], 00:12:57.232 "serial_number": "SPDK00000000000004", 00:12:57.232 "model_number": "SPDK bdev Controller", 00:12:57.232 "max_namespaces": 32, 00:12:57.232 "min_cntlid": 1, 00:12:57.232 "max_cntlid": 65519, 00:12:57.232 "namespaces": [ 00:12:57.232 { 00:12:57.232 "nsid": 1, 00:12:57.232 "bdev_name": "Null4", 00:12:57.232 "name": "Null4", 00:12:57.232 "nguid": "86CCC09E3BF343AAA8B0E4E52D7D45B1", 00:12:57.232 "uuid": "86ccc09e-3bf3-43aa-a8b0-e4e52d7d45b1" 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 
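The `nvmf_get_subsystems` dump above is plain JSON, so the subsystem NQNs can be pulled out with a one-liner. A small sketch using sed in place of jq, against an abbreviated copy of the log's JSON (only two of the five entries are reproduced here):

```shell
# Extract subsystem NQNs from (an abbreviated copy of) the
# nvmf_get_subsystems JSON shown above.
json='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
 {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"}]'
nqns=$(printf '%s\n' "$json" | sed -n 's/.*"nqn": *"\([^"]*\)".*/\1/p')
printf '%s\n' "$nqns"
```

Against the full RPC output this yields the discovery NQN plus cnode1 through cnode4, matching the entries the `nvme discover` pass reported.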
12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:57.232 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.233 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.491 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.491 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:57.491 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:57.491 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.491 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.491 rmmod nvme_tcp 00:12:57.491 rmmod nvme_fabrics 00:12:57.491 rmmod nvme_keyring 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 96750 ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 96750 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 96750 ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 96750 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:57.491 
12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96750 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96750' 00:12:57.491 killing process with pid 96750 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 96750 00:12:57.491 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 96750 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- 
# remove_spdk_ns 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.749 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.654 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.654 00:12:59.654 real 0m9.408s 00:12:59.654 user 0m5.707s 00:12:59.654 sys 0m4.830s 00:12:59.654 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.654 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.654 ************************************ 00:12:59.654 END TEST nvmf_target_discovery 00:12:59.654 ************************************ 00:12:59.654 12:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:59.654 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.914 ************************************ 00:12:59.914 START TEST nvmf_referrals 00:12:59.914 ************************************ 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:59.914 * Looking for test storage... 
00:12:59.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:59.914 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.914 
--rc genhtml_branch_coverage=1 00:12:59.914 --rc genhtml_function_coverage=1 00:12:59.914 --rc genhtml_legend=1 00:12:59.914 --rc geninfo_all_blocks=1 00:12:59.914 --rc geninfo_unexecuted_blocks=1 00:12:59.914 00:12:59.914 ' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.914 --rc genhtml_branch_coverage=1 00:12:59.914 --rc genhtml_function_coverage=1 00:12:59.914 --rc genhtml_legend=1 00:12:59.914 --rc geninfo_all_blocks=1 00:12:59.914 --rc geninfo_unexecuted_blocks=1 00:12:59.914 00:12:59.914 ' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.914 --rc genhtml_branch_coverage=1 00:12:59.914 --rc genhtml_function_coverage=1 00:12:59.914 --rc genhtml_legend=1 00:12:59.914 --rc geninfo_all_blocks=1 00:12:59.914 --rc geninfo_unexecuted_blocks=1 00:12:59.914 00:12:59.914 ' 00:12:59.914 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.914 --rc genhtml_branch_coverage=1 00:12:59.915 --rc genhtml_function_coverage=1 00:12:59.915 --rc genhtml_legend=1 00:12:59.915 --rc geninfo_all_blocks=1 00:12:59.915 --rc geninfo_unexecuted_blocks=1 00:12:59.915 00:12:59.915 ' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.915 
12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.915 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.915 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.915 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.175 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.175 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.175 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.175 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:06.767 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:06.768 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:06.768 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:06.768 Found net devices under 0000:86:00.0: cvl_0_0 00:13:06.768 12:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:06.768 Found net devices under 0000:86:00.1: cvl_0_1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:13:06.768 00:13:06.768 --- 10.0.0.2 ping statistics --- 00:13:06.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.768 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:06.768 00:13:06.768 --- 10.0.0.1 ping statistics --- 00:13:06.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.768 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.768 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=100909 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 100909 00:13:06.769 
12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 100909 ']' 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 [2024-11-20 12:27:11.702792] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:06.769 [2024-11-20 12:27:11.702839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.769 [2024-11-20 12:27:11.779356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.769 [2024-11-20 12:27:11.822751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.769 [2024-11-20 12:27:11.822786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:06.769 [2024-11-20 12:27:11.822793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.769 [2024-11-20 12:27:11.822799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.769 [2024-11-20 12:27:11.822804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.769 [2024-11-20 12:27:11.824229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.769 [2024-11-20 12:27:11.824333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.769 [2024-11-20 12:27:11.824440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.769 [2024-11-20 12:27:11.824441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 [2024-11-20 12:27:11.961217] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 [2024-11-20 12:27:11.974445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:06.769 12:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:06.769 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:06.770 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:06.770 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:07.027 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:07.284 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:07.284 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:07.284 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:07.284 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:07.284 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:07.284 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:07.541 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:07.542 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:07.799 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:08.056 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.056 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.313 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:08.313 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:08.313 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:08.313 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:08.313 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:08.314 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:08.314 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.314 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.314 rmmod nvme_tcp 00:13:08.572 rmmod nvme_fabrics 00:13:08.572 rmmod nvme_keyring 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 100909 ']' 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 100909 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 100909 ']' 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 100909 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100909 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100909' 00:13:08.572 killing process with pid 100909 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 100909 00:13:08.572 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 100909 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.831 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.736 00:13:10.736 real 0m10.954s 00:13:10.736 user 0m12.674s 00:13:10.736 sys 0m5.157s 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 ************************************ 
00:13:10.736 END TEST nvmf_referrals 00:13:10.736 ************************************ 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.736 ************************************ 00:13:10.736 START TEST nvmf_connect_disconnect 00:13:10.736 ************************************ 00:13:10.736 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:10.996 * Looking for test storage... 
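The xtrace that follows exercises the version comparator from SPDK's scripts/common.sh (`cmp_versions 1.15 '<' 2`, checking the installed lcov): it splits both versions on `.`/`-`/`:` via IFS, then compares component by component. A standalone sketch of that split-and-compare logic, using a hypothetical `ver_lt` name in place of the real `cmp_versions`/`lt` helpers:

```shell
#!/bin/bash
# Sketch of the IFS-based version compare seen in the trace above.
# ver_lt VER1 VER2 -> returns 0 (true) iff VER1 < VER2.
ver_lt() {
    local IFS=.-:            # split on the same separators as scripts/common.sh
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                 # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace takes the `lt 1.15 2` branch and ends up exporting the lcov 2.x-style `LCOV_OPTS`.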
00:13:10.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.996 --rc genhtml_branch_coverage=1 00:13:10.996 --rc genhtml_function_coverage=1 00:13:10.996 --rc genhtml_legend=1 00:13:10.996 --rc geninfo_all_blocks=1 00:13:10.996 --rc geninfo_unexecuted_blocks=1 00:13:10.996 00:13:10.996 ' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.996 --rc genhtml_branch_coverage=1 00:13:10.996 --rc genhtml_function_coverage=1 00:13:10.996 --rc genhtml_legend=1 00:13:10.996 --rc geninfo_all_blocks=1 00:13:10.996 --rc geninfo_unexecuted_blocks=1 00:13:10.996 00:13:10.996 ' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.996 --rc genhtml_branch_coverage=1 00:13:10.996 --rc genhtml_function_coverage=1 00:13:10.996 --rc genhtml_legend=1 00:13:10.996 --rc geninfo_all_blocks=1 00:13:10.996 --rc geninfo_unexecuted_blocks=1 00:13:10.996 00:13:10.996 ' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.996 --rc genhtml_branch_coverage=1 00:13:10.996 --rc genhtml_function_coverage=1 00:13:10.996 --rc genhtml_legend=1 00:13:10.996 --rc geninfo_all_blocks=1 00:13:10.996 --rc geninfo_unexecuted_blocks=1 00:13:10.996 00:13:10.996 ' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.996 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.997 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.570 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.570 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.571 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:17.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:17.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.571 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:17.571 Found net devices under 0000:86:00.0: cvl_0_0 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.571 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:17.571 Found net devices under 0000:86:00.1: cvl_0_1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.571 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:13:17.571 00:13:17.571 --- 10.0.0.2 ping statistics --- 00:13:17.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.571 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:13:17.571 00:13:17.571 --- 10.0.0.1 ping statistics --- 00:13:17.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.571 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=105006 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 105006 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 105006 ']' 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.571 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.571 [2024-11-20 12:27:22.793672] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:17.571 [2024-11-20 12:27:22.793713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.571 [2024-11-20 12:27:22.856659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.571 [2024-11-20 12:27:22.899248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
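The `waitforlisten 105006` step here blocks until the freshly launched nvmf_tgt is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A minimal sketch of that kind of bounded poll; `wait_for_path` is a hypothetical stand-in for the real helper, which additionally verifies the pid is alive with `kill -0`:

```shell
#!/bin/bash
# Poll until a path (e.g. an RPC socket) appears, with a bounded retry count.
# wait_for_path PATH [RETRIES] -> 0 once PATH exists, 1 if retries run out.
wait_for_path() {
    local path=$1 retries=${2:-50}
    while (( retries-- > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1            # 50 * 0.1 s = 5 s default budget
    done
    return 1
}
```

With the real helper, a timeout here would surface as the familiar "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message never resolving.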
00:13:17.572 [2024-11-20 12:27:22.899282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.572 [2024-11-20 12:27:22.899289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.572 [2024-11-20 12:27:22.899295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.572 [2024-11-20 12:27:22.899300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.572 [2024-11-20 12:27:22.900679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.572 [2024-11-20 12:27:22.900715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.572 [2024-11-20 12:27:22.900827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.572 [2024-11-20 12:27:22.900827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.572 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.572 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:17.572 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:17.572 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:17.572 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:17.572 12:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 [2024-11-20 12:27:23.039713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.572 12:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.572 [2024-11-20 12:27:23.096250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:17.572 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:20.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:33.931 12:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.931 rmmod nvme_tcp 00:13:33.931 rmmod nvme_fabrics 00:13:33.931 rmmod nvme_keyring 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 105006 ']' 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 105006 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 105006 ']' 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 105006 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105006 00:13:33.931 
12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105006' 00:13:33.931 killing process with pid 105006 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 105006 00:13:33.931 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 105006 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.190 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.096 00:13:36.096 real 0m25.294s 00:13:36.096 user 1m8.439s 00:13:36.096 sys 0m5.889s 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.096 ************************************ 00:13:36.096 END TEST nvmf_connect_disconnect 00:13:36.096 ************************************ 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.096 ************************************ 00:13:36.096 START TEST nvmf_multitarget 00:13:36.096 ************************************ 00:13:36.096 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.357 * Looking for test storage... 
00:13:36.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.357 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:36.357 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:36.357 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.357 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.357 --rc genhtml_branch_coverage=1 00:13:36.357 --rc genhtml_function_coverage=1 00:13:36.357 --rc genhtml_legend=1 00:13:36.357 --rc geninfo_all_blocks=1 00:13:36.357 --rc geninfo_unexecuted_blocks=1 00:13:36.357 00:13:36.357 ' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.357 --rc genhtml_branch_coverage=1 00:13:36.357 --rc genhtml_function_coverage=1 00:13:36.357 --rc genhtml_legend=1 00:13:36.357 --rc geninfo_all_blocks=1 00:13:36.357 --rc geninfo_unexecuted_blocks=1 00:13:36.357 00:13:36.357 ' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.357 --rc genhtml_branch_coverage=1 00:13:36.357 --rc genhtml_function_coverage=1 00:13:36.357 --rc genhtml_legend=1 00:13:36.357 --rc geninfo_all_blocks=1 00:13:36.357 --rc geninfo_unexecuted_blocks=1 00:13:36.357 00:13:36.357 ' 00:13:36.357 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.358 --rc genhtml_branch_coverage=1 00:13:36.358 --rc genhtml_function_coverage=1 00:13:36.358 --rc genhtml_legend=1 00:13:36.358 --rc geninfo_all_blocks=1 00:13:36.358 --rc geninfo_unexecuted_blocks=1 00:13:36.358 00:13:36.358 ' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.358 12:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.358 12:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.358 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:42.930 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.930 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.930 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.930 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.930 
12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.930 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.930 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.931 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.931 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:13:42.931 00:13:42.931 --- 10.0.0.2 ping statistics --- 00:13:42.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.931 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:42.931 00:13:42.931 --- 10.0.0.1 ping statistics --- 00:13:42.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.931 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=111406 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 111406 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 111406 ']' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:42.931 [2024-11-20 12:27:48.158690] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:42.931 [2024-11-20 12:27:48.158734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.931 [2024-11-20 12:27:48.237756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.931 [2024-11-20 12:27:48.279555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.931 [2024-11-20 12:27:48.279591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:42.931 [2024-11-20 12:27:48.279598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.931 [2024-11-20 12:27:48.279604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.931 [2024-11-20 12:27:48.279609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.931 [2024-11-20 12:27:48.281218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.931 [2024-11-20 12:27:48.281319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.931 [2024-11-20 12:27:48.281428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.931 [2024-11-20 12:27:48.281429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:42.931 12:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:42.931 "nvmf_tgt_1" 00:13:42.931 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:43.191 "nvmf_tgt_2" 00:13:43.191 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:43.191 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:43.191 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:43.191 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:43.191 true 00:13:43.191 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:43.450 true 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:43.450 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.451 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:43.451 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.451 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.451 rmmod nvme_tcp 00:13:43.451 rmmod nvme_fabrics 00:13:43.451 rmmod nvme_keyring 00:13:43.709 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.709 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:43.709 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:43.709 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 111406 ']' 00:13:43.709 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 111406 ']' 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111406' 00:13:43.710 killing process with pid 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 111406 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.710 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.248 00:13:46.248 real 0m9.671s 00:13:46.248 user 0m7.248s 00:13:46.248 sys 0m4.872s 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:46.248 ************************************ 00:13:46.248 END TEST nvmf_multitarget 00:13:46.248 ************************************ 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.248 ************************************ 00:13:46.248 START TEST nvmf_rpc 00:13:46.248 ************************************ 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:46.248 * Looking for test storage... 
00:13:46.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.248 12:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.248 --rc genhtml_branch_coverage=1 00:13:46.248 --rc genhtml_function_coverage=1 00:13:46.248 --rc genhtml_legend=1 00:13:46.248 --rc geninfo_all_blocks=1 00:13:46.248 --rc geninfo_unexecuted_blocks=1 
00:13:46.248 00:13:46.248 ' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.248 --rc genhtml_branch_coverage=1 00:13:46.248 --rc genhtml_function_coverage=1 00:13:46.248 --rc genhtml_legend=1 00:13:46.248 --rc geninfo_all_blocks=1 00:13:46.248 --rc geninfo_unexecuted_blocks=1 00:13:46.248 00:13:46.248 ' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.248 --rc genhtml_branch_coverage=1 00:13:46.248 --rc genhtml_function_coverage=1 00:13:46.248 --rc genhtml_legend=1 00:13:46.248 --rc geninfo_all_blocks=1 00:13:46.248 --rc geninfo_unexecuted_blocks=1 00:13:46.248 00:13:46.248 ' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.248 --rc genhtml_branch_coverage=1 00:13:46.248 --rc genhtml_function_coverage=1 00:13:46.248 --rc genhtml_legend=1 00:13:46.248 --rc geninfo_all_blocks=1 00:13:46.248 --rc geninfo_unexecuted_blocks=1 00:13:46.248 00:13:46.248 ' 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.248 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.248 12:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:46.249 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:46.249 12:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.891 
12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:13:52.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:52.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:52.891 Found net devices under 0000:86:00.0: cvl_0_0 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:52.891 Found net devices under 0000:86:00.1: cvl_0_1 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.891 12:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.891 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.892 
12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:13:52.892 00:13:52.892 --- 10.0.0.2 ping statistics --- 00:13:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.892 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:52.892 00:13:52.892 --- 10.0.0.1 ping statistics --- 00:13:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.892 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=115199 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 115199 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 115199 ']' 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.892 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 [2024-11-20 12:27:57.905112] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:13:52.892 [2024-11-20 12:27:57.905159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.892 [2024-11-20 12:27:57.981146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.892 [2024-11-20 12:27:58.020993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.892 [2024-11-20 12:27:58.021028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
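The nvmf_tcp_init trace above (nvmf/common.sh@250-291) builds the test topology: one port of the two-port NIC is moved into a fresh network namespace to act as the target, the other stays in the root namespace as the initiator, an iptables rule opens TCP port 4420, and a ping in each direction verifies the link before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that setup, assuming root privileges and hypothetical interface names tgt0/init0 in place of cvl_0_0/cvl_0_1:

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built by nvmf_tcp_init in the trace above.
# tgt0/init0 are hypothetical stand-ins for cvl_0_0/cvl_0_1; requires root
# and two physically connected NIC ports.
set -euo pipefail

NS=tgt_ns                                # namespace that will host nvmf_tgt
ip netns add "$NS"
ip link set tgt0 netns "$NS"             # target port leaves the root namespace

ip addr add 10.0.0.1/24 dev init0                     # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0  # target side

ip link set init0 up
ip netns exec "$NS" ip link set tgt0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify both ways.
iptables -I INPUT 1 -i init0 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs namespaced, the subsequent `nvme connect` from the root namespace exercises a real TCP path over the wire rather than loopback.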
00:13:52.892 [2024-11-20 12:27:58.021034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.892 [2024-11-20 12:27:58.021040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.892 [2024-11-20 12:27:58.021044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.892 [2024-11-20 12:27:58.022647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.892 [2024-11-20 12:27:58.022762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.892 [2024-11-20 12:27:58.022848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.892 [2024-11-20 12:27:58.022848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.892 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:52.892 "tick_rate": 2100000000, 00:13:52.892 "poll_groups": [ 00:13:52.892 { 00:13:52.892 "name": "nvmf_tgt_poll_group_000", 00:13:52.892 "admin_qpairs": 0, 00:13:52.892 "io_qpairs": 0, 00:13:52.892 "current_admin_qpairs": 0, 00:13:52.892 "current_io_qpairs": 0, 00:13:52.892 "pending_bdev_io": 0, 00:13:52.892 "completed_nvme_io": 0, 00:13:52.892 "transports": [] 00:13:52.892 }, 00:13:52.892 { 00:13:52.892 "name": "nvmf_tgt_poll_group_001", 00:13:52.892 "admin_qpairs": 0, 00:13:52.892 "io_qpairs": 0, 00:13:52.892 "current_admin_qpairs": 0, 00:13:52.892 "current_io_qpairs": 0, 00:13:52.892 "pending_bdev_io": 0, 00:13:52.892 "completed_nvme_io": 0, 00:13:52.892 "transports": [] 00:13:52.892 }, 00:13:52.892 { 00:13:52.892 "name": "nvmf_tgt_poll_group_002", 00:13:52.892 "admin_qpairs": 0, 00:13:52.892 "io_qpairs": 0, 00:13:52.892 "current_admin_qpairs": 0, 00:13:52.892 "current_io_qpairs": 0, 00:13:52.892 "pending_bdev_io": 0, 00:13:52.892 "completed_nvme_io": 0, 00:13:52.892 "transports": [] 00:13:52.892 }, 00:13:52.892 { 00:13:52.892 "name": "nvmf_tgt_poll_group_003", 00:13:52.892 "admin_qpairs": 0, 00:13:52.892 "io_qpairs": 0, 00:13:52.892 "current_admin_qpairs": 0, 00:13:52.892 "current_io_qpairs": 0, 00:13:52.892 "pending_bdev_io": 0, 00:13:52.892 "completed_nvme_io": 0, 00:13:52.892 "transports": [] 00:13:52.892 } 00:13:52.892 ] 00:13:52.892 }' 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:52.892 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 [2024-11-20 12:27:58.279793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.892 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:52.893 "tick_rate": 2100000000, 00:13:52.893 "poll_groups": [ 00:13:52.893 { 00:13:52.893 "name": "nvmf_tgt_poll_group_000", 00:13:52.893 "admin_qpairs": 0, 00:13:52.893 "io_qpairs": 0, 00:13:52.893 "current_admin_qpairs": 0, 00:13:52.893 "current_io_qpairs": 0, 00:13:52.893 "pending_bdev_io": 0, 00:13:52.893 "completed_nvme_io": 0, 00:13:52.893 "transports": [ 00:13:52.893 { 00:13:52.893 "trtype": "TCP" 00:13:52.893 } 00:13:52.893 ] 00:13:52.893 }, 00:13:52.893 { 00:13:52.893 "name": "nvmf_tgt_poll_group_001", 00:13:52.893 "admin_qpairs": 0, 00:13:52.893 "io_qpairs": 0, 00:13:52.893 "current_admin_qpairs": 0, 00:13:52.893 "current_io_qpairs": 0, 00:13:52.893 "pending_bdev_io": 0, 00:13:52.893 
"completed_nvme_io": 0, 00:13:52.893 "transports": [ 00:13:52.893 { 00:13:52.893 "trtype": "TCP" 00:13:52.893 } 00:13:52.893 ] 00:13:52.893 }, 00:13:52.893 { 00:13:52.893 "name": "nvmf_tgt_poll_group_002", 00:13:52.893 "admin_qpairs": 0, 00:13:52.893 "io_qpairs": 0, 00:13:52.893 "current_admin_qpairs": 0, 00:13:52.893 "current_io_qpairs": 0, 00:13:52.893 "pending_bdev_io": 0, 00:13:52.893 "completed_nvme_io": 0, 00:13:52.893 "transports": [ 00:13:52.893 { 00:13:52.893 "trtype": "TCP" 00:13:52.893 } 00:13:52.893 ] 00:13:52.893 }, 00:13:52.893 { 00:13:52.893 "name": "nvmf_tgt_poll_group_003", 00:13:52.893 "admin_qpairs": 0, 00:13:52.893 "io_qpairs": 0, 00:13:52.893 "current_admin_qpairs": 0, 00:13:52.893 "current_io_qpairs": 0, 00:13:52.893 "pending_bdev_io": 0, 00:13:52.893 "completed_nvme_io": 0, 00:13:52.893 "transports": [ 00:13:52.893 { 00:13:52.893 "trtype": "TCP" 00:13:52.893 } 00:13:52.893 ] 00:13:52.893 } 00:13:52.893 ] 00:13:52.893 }' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:52.893 
12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 Malloc1 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:52.893 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 [2024-11-20 12:27:58.467912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:52.893 [2024-11-20 12:27:58.496642] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:52.893 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:52.893 could not add new controller: failed to write to nvme-fabrics device 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.893 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.271 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.271 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:54.271 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.271 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:54.271 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
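The passage above is a host-allowlist check: rpc.sh creates cnode1 with a namespace and listener, disables allow_any_host (sh@54), confirms `nvme connect` is rejected with "does not allow host", then adds the host NQN (sh@61) and connects successfully. Distilled to the RPC calls involved, as a hedged sketch (the rpc.py path and the HOSTNQN value are placeholders taken from this run; a running nvmf_tgt is assumed):

```shell
# Host-allowlist flow distilled from target/rpc.sh@49-63.
# RPC points at scripts/rpc.py in an SPDK checkout; HOSTNQN is this run's host UUID NQN.
RPC=./scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # switch to allowlist mode
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With an empty allowlist this connect must be rejected by the target ...
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
    -a 10.0.0.2 -s 4420 && exit 1

# ... and after whitelisting the host NQN it must succeed.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
    -a 10.0.0.2 -s 4420
```

The later lines of the log run the mirror image: remove_host makes the connect fail again, and re-enabling allow_any_host (sh@72) makes it succeed without an allowlist entry.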
00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.174 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:56.175 12:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.175 [2024-11-20 12:28:01.790056] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:56.175 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:56.175 could not add new controller: failed to write to nvme-fabrics device 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:56.175 
12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.175 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.550 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.550 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:57.550 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.550 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:57.550 12:28:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:59.453 12:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.453 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.454 [2024-11-20 12:28:05.153175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.454 12:28:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.829 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.829 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.829 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.829 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:00.829 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.732 
12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 [2024-11-20 12:28:08.459245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.108 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.108 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:04.108 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.108 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:04.108 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:06.012 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:06.012 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:06.012 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.013 12:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.013 [2024-11-20 12:28:11.766327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.013 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.272 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.207 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.207 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:07.207 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.207 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:07.207 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:09.737 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:09.738 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 [2024-11-20 12:28:15.117333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.738 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.674 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.674 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:10.674 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:10.674 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:10.674 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:12.572 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:12.572 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:12.572 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 [2024-11-20 12:28:18.516997] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.830 12:28:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.203 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.203 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:14.203 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.203 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:14.203 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 [2024-11-20 12:28:21.797447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 [2024-11-20 12:28:21.845547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.109 
12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.109 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:14:16.369 [2024-11-20 12:28:21.893698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 [2024-11-20 12:28:21.941857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 [2024-11-20 12:28:21.990015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.369 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.369 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.369 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.369 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:16.370 "tick_rate": 2100000000, 00:14:16.370 "poll_groups": [ 00:14:16.370 { 00:14:16.370 "name": "nvmf_tgt_poll_group_000", 00:14:16.370 "admin_qpairs": 2, 00:14:16.370 "io_qpairs": 168, 00:14:16.370 "current_admin_qpairs": 0, 00:14:16.370 "current_io_qpairs": 0, 00:14:16.370 "pending_bdev_io": 0, 00:14:16.370 "completed_nvme_io": 267, 00:14:16.370 "transports": [ 00:14:16.370 { 00:14:16.370 "trtype": "TCP" 00:14:16.370 } 00:14:16.370 ] 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": "nvmf_tgt_poll_group_001", 00:14:16.370 "admin_qpairs": 2, 00:14:16.370 "io_qpairs": 168, 00:14:16.370 "current_admin_qpairs": 0, 00:14:16.370 "current_io_qpairs": 0, 00:14:16.370 "pending_bdev_io": 0, 00:14:16.370 "completed_nvme_io": 269, 00:14:16.370 "transports": [ 00:14:16.370 { 00:14:16.370 "trtype": "TCP" 00:14:16.370 } 00:14:16.370 ] 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": "nvmf_tgt_poll_group_002", 00:14:16.370 "admin_qpairs": 1, 00:14:16.370 "io_qpairs": 168, 00:14:16.370 "current_admin_qpairs": 0, 00:14:16.370 "current_io_qpairs": 0, 00:14:16.370 "pending_bdev_io": 0, 
00:14:16.370 "completed_nvme_io": 220, 00:14:16.370 "transports": [ 00:14:16.370 { 00:14:16.370 "trtype": "TCP" 00:14:16.370 } 00:14:16.370 ] 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": "nvmf_tgt_poll_group_003", 00:14:16.370 "admin_qpairs": 2, 00:14:16.370 "io_qpairs": 168, 00:14:16.370 "current_admin_qpairs": 0, 00:14:16.370 "current_io_qpairs": 0, 00:14:16.370 "pending_bdev_io": 0, 00:14:16.370 "completed_nvme_io": 266, 00:14:16.370 "transports": [ 00:14:16.370 { 00:14:16.370 "trtype": "TCP" 00:14:16.370 } 00:14:16.370 ] 00:14:16.370 } 00:14:16.370 ] 00:14:16.370 }' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:16.370 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
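The `jsum` helper exercised above (from `target/rpc.sh`) sums one numeric field across all poll groups in the `nvmf_get_stats` JSON via jq and awk. A minimal reproduction follows; the inline JSON is a trimmed stand-in for live `rpc_cmd nvmf_get_stats` output, with the qpair counts taken from the trace (2+2+1+2 admin, 168 I/O per group).

```shell
#!/usr/bin/env bash
# Hedged sketch of jsum '<filter>': extract one numeric field per poll
# group with jq, then total it with awk, mirroring the trace above.
# The inline JSON is a stand-in for live `rpc_cmd nvmf_get_stats` output.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":2,"io_qpairs":168},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":2,"io_qpairs":168},
  {"name":"nvmf_tgt_poll_group_002","admin_qpairs":1,"io_qpairs":168},
  {"name":"nvmf_tgt_poll_group_003","admin_qpairs":2,"io_qpairs":168}]}'

jsum() {
    local filter=$1
    # jq emits one number per poll group; awk accumulates the total.
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7
jsum '.poll_groups[].io_qpairs'      # 168*4 = 672
```

These totals are exactly what the trace then gates on with `(( 7 > 0 ))` and `(( 672 > 0 ))` to decide the test passed.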
target/rpc.sh@123 -- # nvmftestfini 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.629 rmmod nvme_tcp 00:14:16.629 rmmod nvme_fabrics 00:14:16.629 rmmod nvme_keyring 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 115199 ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 115199 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 115199 ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 115199 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115199 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115199' 00:14:16.629 killing process with pid 115199 00:14:16.629 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 115199 00:14:16.630 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 115199 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.889 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:18.795 00:14:18.795 real 0m32.919s 00:14:18.795 user 1m39.058s 00:14:18.795 sys 0m6.518s 00:14:18.795 12:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.795 ************************************ 00:14:18.795 END TEST nvmf_rpc 00:14:18.795 ************************************ 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.795 12:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.055 ************************************ 00:14:19.055 START TEST nvmf_invalid 00:14:19.055 ************************************ 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:19.055 * Looking for test storage... 
00:14:19.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.055 --rc genhtml_branch_coverage=1 00:14:19.055 --rc 
genhtml_function_coverage=1 00:14:19.055 --rc genhtml_legend=1 00:14:19.055 --rc geninfo_all_blocks=1 00:14:19.055 --rc geninfo_unexecuted_blocks=1 00:14:19.055 00:14:19.055 ' 00:14:19.055 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.056 --rc genhtml_branch_coverage=1 00:14:19.056 --rc genhtml_function_coverage=1 00:14:19.056 --rc genhtml_legend=1 00:14:19.056 --rc geninfo_all_blocks=1 00:14:19.056 --rc geninfo_unexecuted_blocks=1 00:14:19.056 00:14:19.056 ' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.056 --rc genhtml_branch_coverage=1 00:14:19.056 --rc genhtml_function_coverage=1 00:14:19.056 --rc genhtml_legend=1 00:14:19.056 --rc geninfo_all_blocks=1 00:14:19.056 --rc geninfo_unexecuted_blocks=1 00:14:19.056 00:14:19.056 ' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.056 --rc genhtml_branch_coverage=1 00:14:19.056 --rc genhtml_function_coverage=1 00:14:19.056 --rc genhtml_legend=1 00:14:19.056 --rc geninfo_all_blocks=1 00:14:19.056 --rc geninfo_unexecuted_blocks=1 00:14:19.056 00:14:19.056 ' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.056 12:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.056 12:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.056 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:25.627 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.627 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:25.627 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:25.628 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:25.628 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:25.628 Found net devices under 0000:86:00.0: cvl_0_0 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:25.628 Found net devices under 0000:86:00.1: cvl_0_1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.628 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.628 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:25.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:14:25.628 00:14:25.628 --- 10.0.0.2 ping statistics --- 00:14:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.628 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:14:25.628 00:14:25.628 --- 10.0.0.1 ping statistics --- 00:14:25.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.628 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:25.628 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:25.629 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=122804 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 122804 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 122804 ']' 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.629 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 [2024-11-20 12:28:30.847647] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:14:25.629 [2024-11-20 12:28:30.847693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.629 [2024-11-20 12:28:30.926642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.629 [2024-11-20 12:28:30.969163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.629 [2024-11-20 12:28:30.969205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.629 [2024-11-20 12:28:30.969213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.629 [2024-11-20 12:28:30.969218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.629 [2024-11-20 12:28:30.969222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.629 [2024-11-20 12:28:30.970797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.629 [2024-11-20 12:28:30.970903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.629 [2024-11-20 12:28:30.971012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.629 [2024-11-20 12:28:30.971013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7348 00:14:25.629 [2024-11-20 12:28:31.292061] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:25.629 { 00:14:25.629 "nqn": "nqn.2016-06.io.spdk:cnode7348", 00:14:25.629 "tgt_name": "foobar", 00:14:25.629 "method": "nvmf_create_subsystem", 00:14:25.629 "req_id": 1 00:14:25.629 } 00:14:25.629 Got JSON-RPC error 
response 00:14:25.629 response: 00:14:25.629 { 00:14:25.629 "code": -32603, 00:14:25.629 "message": "Unable to find target foobar" 00:14:25.629 }' 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:25.629 { 00:14:25.629 "nqn": "nqn.2016-06.io.spdk:cnode7348", 00:14:25.629 "tgt_name": "foobar", 00:14:25.629 "method": "nvmf_create_subsystem", 00:14:25.629 "req_id": 1 00:14:25.629 } 00:14:25.629 Got JSON-RPC error response 00:14:25.629 response: 00:14:25.629 { 00:14:25.629 "code": -32603, 00:14:25.629 "message": "Unable to find target foobar" 00:14:25.629 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:25.629 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14418 00:14:25.886 [2024-11-20 12:28:31.504809] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14418: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:25.886 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:25.886 { 00:14:25.886 "nqn": "nqn.2016-06.io.spdk:cnode14418", 00:14:25.886 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:25.886 "method": "nvmf_create_subsystem", 00:14:25.886 "req_id": 1 00:14:25.886 } 00:14:25.886 Got JSON-RPC error response 00:14:25.886 response: 00:14:25.886 { 00:14:25.886 "code": -32602, 00:14:25.886 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:25.886 }' 00:14:25.886 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:25.886 { 00:14:25.886 "nqn": "nqn.2016-06.io.spdk:cnode14418", 00:14:25.886 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:25.886 "method": "nvmf_create_subsystem", 00:14:25.886 
"req_id": 1 00:14:25.886 } 00:14:25.886 Got JSON-RPC error response 00:14:25.886 response: 00:14:25.886 { 00:14:25.886 "code": -32602, 00:14:25.886 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:25.886 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:25.886 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:25.886 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20303 00:14:26.143 [2024-11-20 12:28:31.709511] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20303: invalid model number 'SPDK_Controller' 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:26.143 { 00:14:26.143 "nqn": "nqn.2016-06.io.spdk:cnode20303", 00:14:26.143 "model_number": "SPDK_Controller\u001f", 00:14:26.143 "method": "nvmf_create_subsystem", 00:14:26.143 "req_id": 1 00:14:26.143 } 00:14:26.143 Got JSON-RPC error response 00:14:26.143 response: 00:14:26.143 { 00:14:26.143 "code": -32602, 00:14:26.143 "message": "Invalid MN SPDK_Controller\u001f" 00:14:26.143 }' 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:26.143 { 00:14:26.143 "nqn": "nqn.2016-06.io.spdk:cnode20303", 00:14:26.143 "model_number": "SPDK_Controller\u001f", 00:14:26.143 "method": "nvmf_create_subsystem", 00:14:26.143 "req_id": 1 00:14:26.143 } 00:14:26.143 Got JSON-RPC error response 00:14:26.143 response: 00:14:26.143 { 00:14:26.143 "code": -32602, 00:14:26.143 "message": "Invalid MN SPDK_Controller\u001f" 00:14:26.143 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:26.143 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.144 12:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'U5taGE|/GnoCC_kQdcD1' 00:14:26.144 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'U5taGE|/GnoCC_kQdcD1' nqn.2016-06.io.spdk:cnode8230 00:14:26.403 [2024-11-20 12:28:32.050694] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8230: invalid serial number 'U5taGE|/GnoCC_kQdcD1' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:26.403 { 00:14:26.403 "nqn": "nqn.2016-06.io.spdk:cnode8230", 00:14:26.403 "serial_number": "U5taG\u007fE|/GnoCC_kQdcD1", 00:14:26.403 "method": "nvmf_create_subsystem", 00:14:26.403 "req_id": 1 00:14:26.403 } 00:14:26.403 Got JSON-RPC error response 00:14:26.403 response: 00:14:26.403 { 00:14:26.403 "code": -32602, 00:14:26.403 "message": "Invalid SN U5taG\u007fE|/GnoCC_kQdcD1" 00:14:26.403 }' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:26.403 { 00:14:26.403 "nqn": "nqn.2016-06.io.spdk:cnode8230", 00:14:26.403 "serial_number": "U5taG\u007fE|/GnoCC_kQdcD1", 00:14:26.403 "method": "nvmf_create_subsystem", 00:14:26.403 "req_id": 1 00:14:26.403 } 00:14:26.403 Got JSON-RPC error response 00:14:26.403 response: 00:14:26.403 { 00:14:26.403 "code": -32602, 00:14:26.403 "message": "Invalid SN U5taG\u007fE|/GnoCC_kQdcD1" 00:14:26.403 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:26.403 
12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:26.403 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:26.403 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.403 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:26.663 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 
00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:26.663 
12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:26.663 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NJ8P@CsK8ulEL$H<~gs^I;.A9YxR(>z=;'\''|jaa<$' 00:14:26.664 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'NJ8P@CsK8ulEL$H<~gs^I;.A9YxR(>z=;'\''|jaa<$' nqn.2016-06.io.spdk:cnode1990 00:14:26.923 [2024-11-20 12:28:32.520206] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1990: invalid model number 'NJ8P@CsK8ulEL$H<~gs^I;.A9YxR(>z=;'|jaa<$' 00:14:26.923 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:26.923 { 00:14:26.923 "nqn": 
"nqn.2016-06.io.spdk:cnode1990", 00:14:26.923 "model_number": "NJ8P@CsK8ulEL$H<~g\u007fs^I;.A9YxR(>z=;'\''|jaa<$", 00:14:26.923 "method": "nvmf_create_subsystem", 00:14:26.923 "req_id": 1 00:14:26.923 } 00:14:26.923 Got JSON-RPC error response 00:14:26.923 response: 00:14:26.923 { 00:14:26.923 "code": -32602, 00:14:26.923 "message": "Invalid MN NJ8P@CsK8ulEL$H<~g\u007fs^I;.A9YxR(>z=;'\''|jaa<$" 00:14:26.923 }' 00:14:26.923 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:26.923 { 00:14:26.923 "nqn": "nqn.2016-06.io.spdk:cnode1990", 00:14:26.923 "model_number": "NJ8P@CsK8ulEL$H<~g\u007fs^I;.A9YxR(>z=;'|jaa<$", 00:14:26.923 "method": "nvmf_create_subsystem", 00:14:26.923 "req_id": 1 00:14:26.923 } 00:14:26.923 Got JSON-RPC error response 00:14:26.923 response: 00:14:26.923 { 00:14:26.923 "code": -32602, 00:14:26.923 "message": "Invalid MN NJ8P@CsK8ulEL$H<~g\u007fs^I;.A9YxR(>z=;'|jaa<$" 00:14:26.923 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:26.923 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:27.182 [2024-11-20 12:28:32.736980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.182 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:27.440 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:27.440 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:27.440 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:27.440 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:27.440 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:27.440 [2024-11-20 12:28:33.150313] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:27.440 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:27.440 { 00:14:27.440 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:27.440 "listen_address": { 00:14:27.440 "trtype": "tcp", 00:14:27.440 "traddr": "", 00:14:27.440 "trsvcid": "4421" 00:14:27.440 }, 00:14:27.440 "method": "nvmf_subsystem_remove_listener", 00:14:27.440 "req_id": 1 00:14:27.440 } 00:14:27.440 Got JSON-RPC error response 00:14:27.440 response: 00:14:27.440 { 00:14:27.440 "code": -32602, 00:14:27.440 "message": "Invalid parameters" 00:14:27.440 }' 00:14:27.440 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:27.440 { 00:14:27.440 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:27.440 "listen_address": { 00:14:27.440 "trtype": "tcp", 00:14:27.440 "traddr": "", 00:14:27.440 "trsvcid": "4421" 00:14:27.440 }, 00:14:27.440 "method": "nvmf_subsystem_remove_listener", 00:14:27.440 "req_id": 1 00:14:27.440 } 00:14:27.440 Got JSON-RPC error response 00:14:27.440 response: 00:14:27.440 { 00:14:27.440 "code": -32602, 00:14:27.440 "message": "Invalid parameters" 00:14:27.440 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:27.440 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29718 -i 0 00:14:27.698 [2024-11-20 12:28:33.346896] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29718: invalid cntlid range [0-65519] 00:14:27.698 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:27.698 { 00:14:27.698 "nqn": 
"nqn.2016-06.io.spdk:cnode29718", 00:14:27.698 "min_cntlid": 0, 00:14:27.698 "method": "nvmf_create_subsystem", 00:14:27.698 "req_id": 1 00:14:27.698 } 00:14:27.698 Got JSON-RPC error response 00:14:27.699 response: 00:14:27.699 { 00:14:27.699 "code": -32602, 00:14:27.699 "message": "Invalid cntlid range [0-65519]" 00:14:27.699 }' 00:14:27.699 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:27.699 { 00:14:27.699 "nqn": "nqn.2016-06.io.spdk:cnode29718", 00:14:27.699 "min_cntlid": 0, 00:14:27.699 "method": "nvmf_create_subsystem", 00:14:27.699 "req_id": 1 00:14:27.699 } 00:14:27.699 Got JSON-RPC error response 00:14:27.699 response: 00:14:27.699 { 00:14:27.699 "code": -32602, 00:14:27.699 "message": "Invalid cntlid range [0-65519]" 00:14:27.699 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:27.699 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20119 -i 65520 00:14:27.957 [2024-11-20 12:28:33.539544] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20119: invalid cntlid range [65520-65519] 00:14:27.957 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:27.957 { 00:14:27.957 "nqn": "nqn.2016-06.io.spdk:cnode20119", 00:14:27.957 "min_cntlid": 65520, 00:14:27.957 "method": "nvmf_create_subsystem", 00:14:27.957 "req_id": 1 00:14:27.957 } 00:14:27.957 Got JSON-RPC error response 00:14:27.957 response: 00:14:27.957 { 00:14:27.957 "code": -32602, 00:14:27.957 "message": "Invalid cntlid range [65520-65519]" 00:14:27.957 }' 00:14:27.957 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:27.957 { 00:14:27.957 "nqn": "nqn.2016-06.io.spdk:cnode20119", 00:14:27.957 "min_cntlid": 65520, 00:14:27.957 "method": "nvmf_create_subsystem", 00:14:27.957 "req_id": 
1 00:14:27.957 } 00:14:27.957 Got JSON-RPC error response 00:14:27.957 response: 00:14:27.957 { 00:14:27.957 "code": -32602, 00:14:27.957 "message": "Invalid cntlid range [65520-65519]" 00:14:27.957 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:27.957 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22405 -I 0 00:14:28.215 [2024-11-20 12:28:33.740194] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22405: invalid cntlid range [1-0] 00:14:28.215 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:28.215 { 00:14:28.215 "nqn": "nqn.2016-06.io.spdk:cnode22405", 00:14:28.215 "max_cntlid": 0, 00:14:28.215 "method": "nvmf_create_subsystem", 00:14:28.215 "req_id": 1 00:14:28.215 } 00:14:28.215 Got JSON-RPC error response 00:14:28.215 response: 00:14:28.215 { 00:14:28.215 "code": -32602, 00:14:28.215 "message": "Invalid cntlid range [1-0]" 00:14:28.215 }' 00:14:28.215 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:28.215 { 00:14:28.215 "nqn": "nqn.2016-06.io.spdk:cnode22405", 00:14:28.215 "max_cntlid": 0, 00:14:28.215 "method": "nvmf_create_subsystem", 00:14:28.215 "req_id": 1 00:14:28.215 } 00:14:28.215 Got JSON-RPC error response 00:14:28.215 response: 00:14:28.215 { 00:14:28.215 "code": -32602, 00:14:28.215 "message": "Invalid cntlid range [1-0]" 00:14:28.215 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:28.215 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22767 -I 65520 00:14:28.215 [2024-11-20 12:28:33.948909] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22767: invalid cntlid range [1-65520] 
00:14:28.474 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:28.474 { 00:14:28.474 "nqn": "nqn.2016-06.io.spdk:cnode22767", 00:14:28.474 "max_cntlid": 65520, 00:14:28.474 "method": "nvmf_create_subsystem", 00:14:28.474 "req_id": 1 00:14:28.474 } 00:14:28.474 Got JSON-RPC error response 00:14:28.474 response: 00:14:28.474 { 00:14:28.474 "code": -32602, 00:14:28.474 "message": "Invalid cntlid range [1-65520]" 00:14:28.474 }' 00:14:28.474 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:28.474 { 00:14:28.474 "nqn": "nqn.2016-06.io.spdk:cnode22767", 00:14:28.474 "max_cntlid": 65520, 00:14:28.474 "method": "nvmf_create_subsystem", 00:14:28.474 "req_id": 1 00:14:28.474 } 00:14:28.474 Got JSON-RPC error response 00:14:28.474 response: 00:14:28.474 { 00:14:28.474 "code": -32602, 00:14:28.474 "message": "Invalid cntlid range [1-65520]" 00:14:28.474 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:28.474 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3563 -i 6 -I 5 00:14:28.474 [2024-11-20 12:28:34.169659] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3563: invalid cntlid range [6-5] 00:14:28.474 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:28.474 { 00:14:28.474 "nqn": "nqn.2016-06.io.spdk:cnode3563", 00:14:28.474 "min_cntlid": 6, 00:14:28.474 "max_cntlid": 5, 00:14:28.474 "method": "nvmf_create_subsystem", 00:14:28.474 "req_id": 1 00:14:28.474 } 00:14:28.474 Got JSON-RPC error response 00:14:28.474 response: 00:14:28.474 { 00:14:28.474 "code": -32602, 00:14:28.474 "message": "Invalid cntlid range [6-5]" 00:14:28.474 }' 00:14:28.474 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:28.474 { 
00:14:28.474 "nqn": "nqn.2016-06.io.spdk:cnode3563", 00:14:28.474 "min_cntlid": 6, 00:14:28.474 "max_cntlid": 5, 00:14:28.474 "method": "nvmf_create_subsystem", 00:14:28.474 "req_id": 1 00:14:28.474 } 00:14:28.474 Got JSON-RPC error response 00:14:28.474 response: 00:14:28.474 { 00:14:28.474 "code": -32602, 00:14:28.474 "message": "Invalid cntlid range [6-5]" 00:14:28.474 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:28.474 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:28.733 { 00:14:28.733 "name": "foobar", 00:14:28.733 "method": "nvmf_delete_target", 00:14:28.733 "req_id": 1 00:14:28.733 } 00:14:28.733 Got JSON-RPC error response 00:14:28.733 response: 00:14:28.733 { 00:14:28.733 "code": -32602, 00:14:28.733 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:28.733 }' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:28.733 { 00:14:28.733 "name": "foobar", 00:14:28.733 "method": "nvmf_delete_target", 00:14:28.733 "req_id": 1 00:14:28.733 } 00:14:28.733 Got JSON-RPC error response 00:14:28.733 response: 00:14:28.733 { 00:14:28.733 "code": -32602, 00:14:28.733 "message": "The specified target doesn't exist, cannot delete it." 
00:14:28.733 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.733 rmmod nvme_tcp 00:14:28.733 rmmod nvme_fabrics 00:14:28.733 rmmod nvme_keyring 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 122804 ']' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 122804 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 122804 ']' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 122804 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122804 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122804' 00:14:28.733 killing process with pid 122804 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 122804 00:14:28.733 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 122804 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.993 12:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.993 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.897 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.156 00:14:31.156 real 0m12.072s 00:14:31.156 user 0m18.649s 00:14:31.156 sys 0m5.463s 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.157 ************************************ 00:14:31.157 END TEST nvmf_invalid 00:14:31.157 ************************************ 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.157 ************************************ 00:14:31.157 START TEST nvmf_connect_stress 00:14:31.157 ************************************ 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:31.157 * Looking for test storage... 
00:14:31.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:31.157 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.157 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:31.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.157 --rc genhtml_branch_coverage=1 00:14:31.157 --rc genhtml_function_coverage=1 00:14:31.157 --rc genhtml_legend=1 00:14:31.157 --rc geninfo_all_blocks=1 00:14:31.157 --rc geninfo_unexecuted_blocks=1 00:14:31.157 00:14:31.157 ' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:31.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.157 --rc genhtml_branch_coverage=1 00:14:31.157 --rc genhtml_function_coverage=1 00:14:31.157 --rc genhtml_legend=1 00:14:31.157 --rc geninfo_all_blocks=1 00:14:31.157 --rc geninfo_unexecuted_blocks=1 00:14:31.157 00:14:31.157 ' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:31.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.157 --rc genhtml_branch_coverage=1 00:14:31.157 --rc genhtml_function_coverage=1 00:14:31.157 --rc genhtml_legend=1 00:14:31.157 --rc geninfo_all_blocks=1 00:14:31.157 --rc geninfo_unexecuted_blocks=1 00:14:31.157 00:14:31.157 ' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:31.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.157 --rc genhtml_branch_coverage=1 00:14:31.157 --rc genhtml_function_coverage=1 00:14:31.157 --rc genhtml_legend=1 00:14:31.157 --rc geninfo_all_blocks=1 00:14:31.157 --rc geninfo_unexecuted_blocks=1 00:14:31.157 00:14:31.157 ' 00:14:31.157 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.416 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.417 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.982 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.983 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:37.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.983 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:37.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.983 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:37.983 Found net devices under 0000:86:00.0: cvl_0_0 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:37.983 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:37.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:14:37.983 00:14:37.983 --- 10.0.0.2 ping statistics --- 00:14:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.983 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:14:37.983 00:14:37.983 --- 10.0.0.1 ping statistics --- 00:14:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.983 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.983 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:37.984 12:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=127205 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 127205 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 127205 ']' 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.984 12:28:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.984 [2024-11-20 12:28:43.028422] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:14:37.984 [2024-11-20 12:28:43.028469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.984 [2024-11-20 12:28:43.105116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.984 [2024-11-20 12:28:43.144335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.984 [2024-11-20 12:28:43.144370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.984 [2024-11-20 12:28:43.144377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.984 [2024-11-20 12:28:43.144383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.984 [2024-11-20 12:28:43.144387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:37.984 [2024-11-20 12:28:43.145783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.984 [2024-11-20 12:28:43.145871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.984 [2024-11-20 12:28:43.145872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.243 [2024-11-20 12:28:43.895976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.243 [2024-11-20 12:28:43.916199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.243 NULL1 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=127309 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.243 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.243 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.502 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.760 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.760 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:38.760 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.760 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.760 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.018 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.018 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:39.018 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.018 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.018 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.276 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.276 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:39.276 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.276 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.276 12:28:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.842 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.842 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:39.842 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.842 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.842 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.100 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:40.100 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.100 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.100 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.359 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.359 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:40.359 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.359 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.359 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.617 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.617 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:40.617 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.617 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.617 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.875 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.875 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:40.875 12:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.875 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.875 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.441 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.441 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:41.441 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.441 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.441 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.699 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.699 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:41.699 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.699 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.699 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.957 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.957 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:41.957 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.957 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.957 12:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.215 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.215 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:42.215 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.215 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.215 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.781 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.781 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:42.781 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.781 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.781 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.040 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.040 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:43.040 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.040 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.040 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.320 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.320 12:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:43.320 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.320 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.320 12:28:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.595 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.595 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:43.595 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.595 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.595 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.909 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.909 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:43.909 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.909 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.909 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.168 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.168 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:44.168 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.168 12:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.168 12:28:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.735 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.735 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:44.735 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.735 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.735 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.994 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.994 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:44.994 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.994 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.994 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.252 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.252 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:45.252 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.253 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.253 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.511 12:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.511 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:45.511 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.511 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.511 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.770 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.770 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:45.770 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.770 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.770 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.339 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.339 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:46.339 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.339 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.339 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.597 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.597 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:46.597 
12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.597 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.597 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.856 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.856 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:46.856 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.856 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.856 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.115 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.115 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:47.115 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.115 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.115 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.377 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.377 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:47.377 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.377 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.377 
12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.945 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.945 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:47.945 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.945 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.945 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.204 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.204 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:48.204 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.204 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.204 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.479 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 127309 00:14:48.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (127309) - No such process 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 127309 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.479 rmmod nvme_tcp 00:14:48.479 rmmod nvme_fabrics 00:14:48.479 rmmod nvme_keyring 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 127205 ']' 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 127205 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 127205 ']' 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 127205 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127205 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127205' 00:14:48.479 killing process with pid 127205 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 127205 00:14:48.479 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 127205 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.746 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:51.284 00:14:51.284 real 0m19.723s 00:14:51.284 user 0m41.341s 00:14:51.284 sys 0m8.705s 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.284 ************************************ 00:14:51.284 END TEST nvmf_connect_stress 00:14:51.284 ************************************ 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.284 ************************************ 00:14:51.284 START TEST nvmf_fused_ordering 00:14:51.284 ************************************ 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:51.284 * Looking for test storage... 
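The long run of paired `kill -0 127309` / `rpc_cmd` entries in the connect_stress output above is the script polling its stress process: `kill -0` delivers no signal, it only tests whether the PID still exists, and the loop ends once bash reports "No such process". A minimal sketch of that polling pattern (the sleep interval and the `echo` stand-in for the RPC call are illustrative, not the values connect_stress.sh actually uses):

```shell
#!/usr/bin/env bash
# Short-lived background job standing in for the stress process.
sleep 1 &
pid=$!

# kill -0 sends no signal; it only checks that the PID exists
# and that we are allowed to signal it (exit status 0 if so).
while kill -0 "$pid" 2>/dev/null; do
    echo "process $pid still running"   # connect_stress.sh issues an RPC here
    sleep 0.2
done

wait "$pid"   # reap the child so it does not linger as a zombie
echo "process $pid is gone"
```

Once the PID disappears, `kill -0` fails and the loop falls through, which is exactly the "kill: (127309) - No such process" transition visible in the log.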
00:14:51.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:51.284 12:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.284 12:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:51.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.284 --rc genhtml_branch_coverage=1 00:14:51.284 --rc genhtml_function_coverage=1 00:14:51.284 --rc genhtml_legend=1 00:14:51.284 --rc geninfo_all_blocks=1 00:14:51.284 --rc geninfo_unexecuted_blocks=1 00:14:51.284 00:14:51.284 ' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:51.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.284 --rc genhtml_branch_coverage=1 00:14:51.284 --rc genhtml_function_coverage=1 00:14:51.284 --rc genhtml_legend=1 00:14:51.284 --rc geninfo_all_blocks=1 00:14:51.284 --rc geninfo_unexecuted_blocks=1 00:14:51.284 00:14:51.284 ' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:51.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.284 --rc genhtml_branch_coverage=1 00:14:51.284 --rc genhtml_function_coverage=1 00:14:51.284 --rc genhtml_legend=1 00:14:51.284 --rc geninfo_all_blocks=1 00:14:51.284 --rc geninfo_unexecuted_blocks=1 00:14:51.284 00:14:51.284 ' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:51.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.284 --rc genhtml_branch_coverage=1 00:14:51.284 --rc genhtml_function_coverage=1 00:14:51.284 --rc genhtml_legend=1 00:14:51.284 --rc geninfo_all_blocks=1 00:14:51.284 --rc geninfo_unexecuted_blocks=1 00:14:51.284 00:14:51.284 ' 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.284 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
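The `[: : integer expression expected` message from `common.sh` line 33 above comes from evaluating `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands on both sides, and an unset or empty variable expands to the empty string. A small sketch of the failure mode and the usual guard (the variable name `flag` is illustrative, not the one common.sh uses):

```shell
#!/usr/bin/env bash
flag=""   # empty, as in the trace

# Reproduces the error: '-eq' needs integers on both sides, so an
# empty expansion makes [ print "integer expression expected".
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi

# Common guard: default the expansion to 0 so the test is always numeric.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

With the `${flag:-0}` default, the comparison is well-formed whether or not the variable was ever set, and the branch simply takes the "disabled" path.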
00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:51.285 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:57.858 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.859 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:57.859 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.859 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:57.859 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.859 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:57.859 Found net devices under 0000:86:00.0: cvl_0_0 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:57.859 Found net devices under 0000:86:00.1: cvl_0_1 
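The discovery loop above resolves each PCI function to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/`*. A minimal standalone sketch of that lookup follows; the PCI address is a placeholder from this run, and on a machine without that device the function simply prints nothing:

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs lookup in gather_supported_nvmf_pci_devs:
# list the net interfaces that sit behind one PCI function via sysfs.
# The address argument is illustrative (taken from this log's hardware).
pci_net_devs() {
  local pci=$1 d
  for d in /sys/bus/pci/devices/"$pci"/net/*; do
    # An unmatched glob stays literal, so guard with -e before printing.
    if [ -e "$d" ]; then
      basename "$d"   # e.g. cvl_0_0 in this run
    fi
  done
}
pci_net_devs 0000:86:00.0
```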
00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.859 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:57.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
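The nvmf_tcp_init steps just traced (move one port into a target namespace, address both sides, bring links up, open TCP 4420) can be condensed into a dry-run script. This is a sketch only: it prints the commands rather than applying them, and the names `spdk_tgt_ns`, `nic_tgt`, `nic_ini` are placeholders, not the cvl_0_0/cvl_0_1 devices from this run; applying the real commands requires root.

```shell
#!/usr/bin/env bash
# Dry-run plan for the netns split performed by nvmf_tcp_init above.
# Prints the ip/iptables commands instead of executing them.
netns_plan() {
  local ns=$1 tgt=$2 ini=$3
  printf '%s\n' \
    "ip netns add $ns" \
    "ip link set $tgt netns $ns" \
    "ip addr add 10.0.0.1/24 dev $ini" \
    "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt" \
    "ip link set $ini up" \
    "ip netns exec $ns ip link set $tgt up" \
    "ip netns exec $ns ip link set lo up" \
    "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
netns_plan spdk_tgt_ns nic_tgt nic_ini
```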
00:14:57.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:14:57.860 00:14:57.860 --- 10.0.0.2 ping statistics --- 00:14:57.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.860 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:14:57.860 00:14:57.860 --- 10.0.0.1 ping statistics --- 00:14:57.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.860 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:57.860 12:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=132614 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 132614 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 132614 ']' 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.860 12:29:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 [2024-11-20 12:29:02.811412] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:14:57.860 [2024-11-20 12:29:02.811463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.860 [2024-11-20 12:29:02.889577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.860 [2024-11-20 12:29:02.930180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.860 [2024-11-20 12:29:02.930217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.860 [2024-11-20 12:29:02.930224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.860 [2024-11-20 12:29:02.930230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.860 [2024-11-20 12:29:02.930235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.860 [2024-11-20 12:29:02.930796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 [2024-11-20 12:29:03.065453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.860 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.861 [2024-11-20 12:29:03.085629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.861 NULL1 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.861 12:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:57.861 [2024-11-20 12:29:03.145985] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:14:57.861 [2024-11-20 12:29:03.146030] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132638 ] 00:14:57.861 Attached to nqn.2016-06.io.spdk:cnode1 00:14:57.861 Namespace ID: 1 size: 1GB 00:14:57.861 fused_ordering(0) 00:14:57.861 fused_ordering(1) 00:14:57.861 fused_ordering(2) 00:14:57.861 fused_ordering(3) 00:14:57.861 fused_ordering(4) 00:14:57.861 fused_ordering(5) 00:14:57.861 fused_ordering(6) 00:14:57.861 fused_ordering(7) 00:14:57.861 fused_ordering(8) 00:14:57.861 fused_ordering(9) 00:14:57.861 fused_ordering(10) 00:14:57.861 fused_ordering(11) 00:14:57.861 fused_ordering(12) 00:14:57.861 fused_ordering(13) 00:14:57.861 fused_ordering(14) 00:14:57.861 fused_ordering(15) 00:14:57.861 fused_ordering(16) 00:14:57.861 fused_ordering(17) 00:14:57.861 fused_ordering(18) 00:14:57.861 fused_ordering(19) 00:14:57.861 fused_ordering(20) 00:14:57.861 fused_ordering(21) 00:14:57.861 fused_ordering(22) 00:14:57.861 fused_ordering(23) 00:14:57.861 fused_ordering(24) 00:14:57.861 fused_ordering(25) 00:14:57.861 fused_ordering(26) 00:14:57.861 fused_ordering(27) 00:14:57.861 
[... fused_ordering(28) through fused_ordering(392): 365 consecutive in-order entries elided, timestamps 00:14:57.861 to 00:14:58.122 ...] 
00:14:58.122 fused_ordering(393) 00:14:58.122 fused_ordering(394) 00:14:58.122 fused_ordering(395) 00:14:58.122 fused_ordering(396) 00:14:58.122 fused_ordering(397) 00:14:58.122 fused_ordering(398) 00:14:58.122 fused_ordering(399) 00:14:58.122 fused_ordering(400) 00:14:58.122 fused_ordering(401) 00:14:58.122 fused_ordering(402) 00:14:58.122 fused_ordering(403) 00:14:58.122 fused_ordering(404) 00:14:58.122 fused_ordering(405) 00:14:58.122 fused_ordering(406) 00:14:58.122 fused_ordering(407) 00:14:58.122 fused_ordering(408) 00:14:58.122 fused_ordering(409) 00:14:58.122 fused_ordering(410) 00:14:58.381 fused_ordering(411) 00:14:58.381 fused_ordering(412) 00:14:58.381 fused_ordering(413) 00:14:58.381 fused_ordering(414) 00:14:58.381 fused_ordering(415) 00:14:58.381 fused_ordering(416) 00:14:58.381 fused_ordering(417) 00:14:58.381 fused_ordering(418) 00:14:58.381 fused_ordering(419) 00:14:58.381 fused_ordering(420) 00:14:58.381 fused_ordering(421) 00:14:58.381 fused_ordering(422) 00:14:58.381 fused_ordering(423) 00:14:58.381 fused_ordering(424) 00:14:58.381 fused_ordering(425) 00:14:58.381 fused_ordering(426) 00:14:58.381 fused_ordering(427) 00:14:58.381 fused_ordering(428) 00:14:58.382 fused_ordering(429) 00:14:58.382 fused_ordering(430) 00:14:58.382 fused_ordering(431) 00:14:58.382 fused_ordering(432) 00:14:58.382 fused_ordering(433) 00:14:58.382 fused_ordering(434) 00:14:58.382 fused_ordering(435) 00:14:58.382 fused_ordering(436) 00:14:58.382 fused_ordering(437) 00:14:58.382 fused_ordering(438) 00:14:58.382 fused_ordering(439) 00:14:58.382 fused_ordering(440) 00:14:58.382 fused_ordering(441) 00:14:58.382 fused_ordering(442) 00:14:58.382 fused_ordering(443) 00:14:58.382 fused_ordering(444) 00:14:58.382 fused_ordering(445) 00:14:58.382 fused_ordering(446) 00:14:58.382 fused_ordering(447) 00:14:58.382 fused_ordering(448) 00:14:58.382 fused_ordering(449) 00:14:58.382 fused_ordering(450) 00:14:58.382 fused_ordering(451) 00:14:58.382 fused_ordering(452) 00:14:58.382 
fused_ordering(453) 00:14:58.382 fused_ordering(454) 00:14:58.382 fused_ordering(455) 00:14:58.382 fused_ordering(456) 00:14:58.382 fused_ordering(457) 00:14:58.382 fused_ordering(458) 00:14:58.382 fused_ordering(459) 00:14:58.382 fused_ordering(460) 00:14:58.382 fused_ordering(461) 00:14:58.382 fused_ordering(462) 00:14:58.382 fused_ordering(463) 00:14:58.382 fused_ordering(464) 00:14:58.382 fused_ordering(465) 00:14:58.382 fused_ordering(466) 00:14:58.382 fused_ordering(467) 00:14:58.382 fused_ordering(468) 00:14:58.382 fused_ordering(469) 00:14:58.382 fused_ordering(470) 00:14:58.382 fused_ordering(471) 00:14:58.382 fused_ordering(472) 00:14:58.382 fused_ordering(473) 00:14:58.382 fused_ordering(474) 00:14:58.382 fused_ordering(475) 00:14:58.382 fused_ordering(476) 00:14:58.382 fused_ordering(477) 00:14:58.382 fused_ordering(478) 00:14:58.382 fused_ordering(479) 00:14:58.382 fused_ordering(480) 00:14:58.382 fused_ordering(481) 00:14:58.382 fused_ordering(482) 00:14:58.382 fused_ordering(483) 00:14:58.382 fused_ordering(484) 00:14:58.382 fused_ordering(485) 00:14:58.382 fused_ordering(486) 00:14:58.382 fused_ordering(487) 00:14:58.382 fused_ordering(488) 00:14:58.382 fused_ordering(489) 00:14:58.382 fused_ordering(490) 00:14:58.382 fused_ordering(491) 00:14:58.382 fused_ordering(492) 00:14:58.382 fused_ordering(493) 00:14:58.382 fused_ordering(494) 00:14:58.382 fused_ordering(495) 00:14:58.382 fused_ordering(496) 00:14:58.382 fused_ordering(497) 00:14:58.382 fused_ordering(498) 00:14:58.382 fused_ordering(499) 00:14:58.382 fused_ordering(500) 00:14:58.382 fused_ordering(501) 00:14:58.382 fused_ordering(502) 00:14:58.382 fused_ordering(503) 00:14:58.382 fused_ordering(504) 00:14:58.382 fused_ordering(505) 00:14:58.382 fused_ordering(506) 00:14:58.382 fused_ordering(507) 00:14:58.382 fused_ordering(508) 00:14:58.382 fused_ordering(509) 00:14:58.382 fused_ordering(510) 00:14:58.382 fused_ordering(511) 00:14:58.382 fused_ordering(512) 00:14:58.382 fused_ordering(513) 
00:14:58.382 fused_ordering(514) 00:14:58.382 fused_ordering(515) 00:14:58.382 fused_ordering(516) 00:14:58.382 fused_ordering(517) 00:14:58.382 fused_ordering(518) 00:14:58.382 fused_ordering(519) 00:14:58.382 fused_ordering(520) 00:14:58.382 fused_ordering(521) 00:14:58.382 fused_ordering(522) 00:14:58.382 fused_ordering(523) 00:14:58.382 fused_ordering(524) 00:14:58.382 fused_ordering(525) 00:14:58.382 fused_ordering(526) 00:14:58.382 fused_ordering(527) 00:14:58.382 fused_ordering(528) 00:14:58.382 fused_ordering(529) 00:14:58.382 fused_ordering(530) 00:14:58.382 fused_ordering(531) 00:14:58.382 fused_ordering(532) 00:14:58.382 fused_ordering(533) 00:14:58.382 fused_ordering(534) 00:14:58.382 fused_ordering(535) 00:14:58.382 fused_ordering(536) 00:14:58.382 fused_ordering(537) 00:14:58.382 fused_ordering(538) 00:14:58.382 fused_ordering(539) 00:14:58.382 fused_ordering(540) 00:14:58.382 fused_ordering(541) 00:14:58.382 fused_ordering(542) 00:14:58.382 fused_ordering(543) 00:14:58.382 fused_ordering(544) 00:14:58.382 fused_ordering(545) 00:14:58.382 fused_ordering(546) 00:14:58.382 fused_ordering(547) 00:14:58.382 fused_ordering(548) 00:14:58.382 fused_ordering(549) 00:14:58.382 fused_ordering(550) 00:14:58.382 fused_ordering(551) 00:14:58.382 fused_ordering(552) 00:14:58.382 fused_ordering(553) 00:14:58.382 fused_ordering(554) 00:14:58.382 fused_ordering(555) 00:14:58.382 fused_ordering(556) 00:14:58.382 fused_ordering(557) 00:14:58.382 fused_ordering(558) 00:14:58.382 fused_ordering(559) 00:14:58.382 fused_ordering(560) 00:14:58.382 fused_ordering(561) 00:14:58.382 fused_ordering(562) 00:14:58.382 fused_ordering(563) 00:14:58.382 fused_ordering(564) 00:14:58.382 fused_ordering(565) 00:14:58.382 fused_ordering(566) 00:14:58.382 fused_ordering(567) 00:14:58.382 fused_ordering(568) 00:14:58.382 fused_ordering(569) 00:14:58.382 fused_ordering(570) 00:14:58.382 fused_ordering(571) 00:14:58.382 fused_ordering(572) 00:14:58.382 fused_ordering(573) 00:14:58.382 
fused_ordering(574) 00:14:58.382 fused_ordering(575) 00:14:58.382 fused_ordering(576) 00:14:58.382 fused_ordering(577) 00:14:58.382 fused_ordering(578) 00:14:58.382 fused_ordering(579) 00:14:58.382 fused_ordering(580) 00:14:58.382 fused_ordering(581) 00:14:58.382 fused_ordering(582) 00:14:58.382 fused_ordering(583) 00:14:58.382 fused_ordering(584) 00:14:58.382 fused_ordering(585) 00:14:58.382 fused_ordering(586) 00:14:58.382 fused_ordering(587) 00:14:58.382 fused_ordering(588) 00:14:58.382 fused_ordering(589) 00:14:58.382 fused_ordering(590) 00:14:58.382 fused_ordering(591) 00:14:58.382 fused_ordering(592) 00:14:58.382 fused_ordering(593) 00:14:58.382 fused_ordering(594) 00:14:58.382 fused_ordering(595) 00:14:58.382 fused_ordering(596) 00:14:58.382 fused_ordering(597) 00:14:58.382 fused_ordering(598) 00:14:58.382 fused_ordering(599) 00:14:58.382 fused_ordering(600) 00:14:58.382 fused_ordering(601) 00:14:58.382 fused_ordering(602) 00:14:58.382 fused_ordering(603) 00:14:58.382 fused_ordering(604) 00:14:58.382 fused_ordering(605) 00:14:58.382 fused_ordering(606) 00:14:58.382 fused_ordering(607) 00:14:58.382 fused_ordering(608) 00:14:58.382 fused_ordering(609) 00:14:58.382 fused_ordering(610) 00:14:58.382 fused_ordering(611) 00:14:58.382 fused_ordering(612) 00:14:58.382 fused_ordering(613) 00:14:58.382 fused_ordering(614) 00:14:58.382 fused_ordering(615) 00:14:58.950 fused_ordering(616) 00:14:58.950 fused_ordering(617) 00:14:58.950 fused_ordering(618) 00:14:58.950 fused_ordering(619) 00:14:58.950 fused_ordering(620) 00:14:58.950 fused_ordering(621) 00:14:58.950 fused_ordering(622) 00:14:58.950 fused_ordering(623) 00:14:58.950 fused_ordering(624) 00:14:58.950 fused_ordering(625) 00:14:58.950 fused_ordering(626) 00:14:58.950 fused_ordering(627) 00:14:58.950 fused_ordering(628) 00:14:58.950 fused_ordering(629) 00:14:58.950 fused_ordering(630) 00:14:58.950 fused_ordering(631) 00:14:58.950 fused_ordering(632) 00:14:58.950 fused_ordering(633) 00:14:58.950 fused_ordering(634) 
00:14:58.950 fused_ordering(635) 00:14:58.950 fused_ordering(636) 00:14:58.950 fused_ordering(637) 00:14:58.950 fused_ordering(638) 00:14:58.950 fused_ordering(639) 00:14:58.950 fused_ordering(640) 00:14:58.950 fused_ordering(641) 00:14:58.950 fused_ordering(642) 00:14:58.950 fused_ordering(643) 00:14:58.950 fused_ordering(644) 00:14:58.950 fused_ordering(645) 00:14:58.950 fused_ordering(646) 00:14:58.950 fused_ordering(647) 00:14:58.950 fused_ordering(648) 00:14:58.950 fused_ordering(649) 00:14:58.950 fused_ordering(650) 00:14:58.950 fused_ordering(651) 00:14:58.950 fused_ordering(652) 00:14:58.950 fused_ordering(653) 00:14:58.950 fused_ordering(654) 00:14:58.950 fused_ordering(655) 00:14:58.950 fused_ordering(656) 00:14:58.950 fused_ordering(657) 00:14:58.950 fused_ordering(658) 00:14:58.950 fused_ordering(659) 00:14:58.950 fused_ordering(660) 00:14:58.950 fused_ordering(661) 00:14:58.950 fused_ordering(662) 00:14:58.950 fused_ordering(663) 00:14:58.950 fused_ordering(664) 00:14:58.950 fused_ordering(665) 00:14:58.950 fused_ordering(666) 00:14:58.950 fused_ordering(667) 00:14:58.950 fused_ordering(668) 00:14:58.950 fused_ordering(669) 00:14:58.950 fused_ordering(670) 00:14:58.950 fused_ordering(671) 00:14:58.950 fused_ordering(672) 00:14:58.950 fused_ordering(673) 00:14:58.950 fused_ordering(674) 00:14:58.950 fused_ordering(675) 00:14:58.950 fused_ordering(676) 00:14:58.950 fused_ordering(677) 00:14:58.950 fused_ordering(678) 00:14:58.950 fused_ordering(679) 00:14:58.950 fused_ordering(680) 00:14:58.950 fused_ordering(681) 00:14:58.950 fused_ordering(682) 00:14:58.950 fused_ordering(683) 00:14:58.950 fused_ordering(684) 00:14:58.950 fused_ordering(685) 00:14:58.950 fused_ordering(686) 00:14:58.950 fused_ordering(687) 00:14:58.950 fused_ordering(688) 00:14:58.950 fused_ordering(689) 00:14:58.950 fused_ordering(690) 00:14:58.950 fused_ordering(691) 00:14:58.950 fused_ordering(692) 00:14:58.950 fused_ordering(693) 00:14:58.950 fused_ordering(694) 00:14:58.950 
fused_ordering(695) 00:14:58.950 fused_ordering(696) 00:14:58.950 fused_ordering(697) 00:14:58.950 fused_ordering(698) 00:14:58.950 fused_ordering(699) 00:14:58.950 fused_ordering(700) 00:14:58.950 fused_ordering(701) 00:14:58.950 fused_ordering(702) 00:14:58.950 fused_ordering(703) 00:14:58.950 fused_ordering(704) 00:14:58.950 fused_ordering(705) 00:14:58.950 fused_ordering(706) 00:14:58.950 fused_ordering(707) 00:14:58.950 fused_ordering(708) 00:14:58.950 fused_ordering(709) 00:14:58.950 fused_ordering(710) 00:14:58.950 fused_ordering(711) 00:14:58.950 fused_ordering(712) 00:14:58.950 fused_ordering(713) 00:14:58.950 fused_ordering(714) 00:14:58.950 fused_ordering(715) 00:14:58.950 fused_ordering(716) 00:14:58.950 fused_ordering(717) 00:14:58.950 fused_ordering(718) 00:14:58.950 fused_ordering(719) 00:14:58.950 fused_ordering(720) 00:14:58.950 fused_ordering(721) 00:14:58.950 fused_ordering(722) 00:14:58.950 fused_ordering(723) 00:14:58.950 fused_ordering(724) 00:14:58.950 fused_ordering(725) 00:14:58.950 fused_ordering(726) 00:14:58.950 fused_ordering(727) 00:14:58.950 fused_ordering(728) 00:14:58.950 fused_ordering(729) 00:14:58.950 fused_ordering(730) 00:14:58.950 fused_ordering(731) 00:14:58.950 fused_ordering(732) 00:14:58.950 fused_ordering(733) 00:14:58.950 fused_ordering(734) 00:14:58.950 fused_ordering(735) 00:14:58.950 fused_ordering(736) 00:14:58.950 fused_ordering(737) 00:14:58.950 fused_ordering(738) 00:14:58.950 fused_ordering(739) 00:14:58.950 fused_ordering(740) 00:14:58.950 fused_ordering(741) 00:14:58.950 fused_ordering(742) 00:14:58.950 fused_ordering(743) 00:14:58.950 fused_ordering(744) 00:14:58.950 fused_ordering(745) 00:14:58.950 fused_ordering(746) 00:14:58.950 fused_ordering(747) 00:14:58.950 fused_ordering(748) 00:14:58.950 fused_ordering(749) 00:14:58.950 fused_ordering(750) 00:14:58.950 fused_ordering(751) 00:14:58.950 fused_ordering(752) 00:14:58.950 fused_ordering(753) 00:14:58.950 fused_ordering(754) 00:14:58.950 fused_ordering(755) 
00:14:58.950 fused_ordering(756) 00:14:58.950 fused_ordering(757) 00:14:58.950 fused_ordering(758) 00:14:58.950 fused_ordering(759) 00:14:58.950 fused_ordering(760) 00:14:58.950 fused_ordering(761) 00:14:58.950 fused_ordering(762) 00:14:58.950 fused_ordering(763) 00:14:58.950 fused_ordering(764) 00:14:58.950 fused_ordering(765) 00:14:58.950 fused_ordering(766) 00:14:58.950 fused_ordering(767) 00:14:58.950 fused_ordering(768) 00:14:58.950 fused_ordering(769) 00:14:58.950 fused_ordering(770) 00:14:58.950 fused_ordering(771) 00:14:58.950 fused_ordering(772) 00:14:58.950 fused_ordering(773) 00:14:58.950 fused_ordering(774) 00:14:58.950 fused_ordering(775) 00:14:58.950 fused_ordering(776) 00:14:58.950 fused_ordering(777) 00:14:58.950 fused_ordering(778) 00:14:58.950 fused_ordering(779) 00:14:58.950 fused_ordering(780) 00:14:58.950 fused_ordering(781) 00:14:58.950 fused_ordering(782) 00:14:58.950 fused_ordering(783) 00:14:58.950 fused_ordering(784) 00:14:58.950 fused_ordering(785) 00:14:58.950 fused_ordering(786) 00:14:58.950 fused_ordering(787) 00:14:58.950 fused_ordering(788) 00:14:58.950 fused_ordering(789) 00:14:58.950 fused_ordering(790) 00:14:58.950 fused_ordering(791) 00:14:58.950 fused_ordering(792) 00:14:58.950 fused_ordering(793) 00:14:58.950 fused_ordering(794) 00:14:58.950 fused_ordering(795) 00:14:58.950 fused_ordering(796) 00:14:58.950 fused_ordering(797) 00:14:58.950 fused_ordering(798) 00:14:58.950 fused_ordering(799) 00:14:58.950 fused_ordering(800) 00:14:58.950 fused_ordering(801) 00:14:58.950 fused_ordering(802) 00:14:58.950 fused_ordering(803) 00:14:58.950 fused_ordering(804) 00:14:58.950 fused_ordering(805) 00:14:58.950 fused_ordering(806) 00:14:58.950 fused_ordering(807) 00:14:58.950 fused_ordering(808) 00:14:58.950 fused_ordering(809) 00:14:58.950 fused_ordering(810) 00:14:58.950 fused_ordering(811) 00:14:58.950 fused_ordering(812) 00:14:58.950 fused_ordering(813) 00:14:58.950 fused_ordering(814) 00:14:58.950 fused_ordering(815) 00:14:58.950 
fused_ordering(816) 00:14:58.950 fused_ordering(817) 00:14:58.950 fused_ordering(818) 00:14:58.950 fused_ordering(819) 00:14:58.950 fused_ordering(820) 00:14:59.210 fused_ordering(821) 00:14:59.210 fused_ordering(822) 00:14:59.210 fused_ordering(823) 00:14:59.210 fused_ordering(824) 00:14:59.210 fused_ordering(825) 00:14:59.210 fused_ordering(826) 00:14:59.210 fused_ordering(827) 00:14:59.210 fused_ordering(828) 00:14:59.210 fused_ordering(829) 00:14:59.210 fused_ordering(830) 00:14:59.210 fused_ordering(831) 00:14:59.210 fused_ordering(832) 00:14:59.210 fused_ordering(833) 00:14:59.210 fused_ordering(834) 00:14:59.210 fused_ordering(835) 00:14:59.210 fused_ordering(836) 00:14:59.210 fused_ordering(837) 00:14:59.210 fused_ordering(838) 00:14:59.210 fused_ordering(839) 00:14:59.210 fused_ordering(840) 00:14:59.210 fused_ordering(841) 00:14:59.210 fused_ordering(842) 00:14:59.210 fused_ordering(843) 00:14:59.210 fused_ordering(844) 00:14:59.210 fused_ordering(845) 00:14:59.210 fused_ordering(846) 00:14:59.210 fused_ordering(847) 00:14:59.210 fused_ordering(848) 00:14:59.210 fused_ordering(849) 00:14:59.210 fused_ordering(850) 00:14:59.210 fused_ordering(851) 00:14:59.210 fused_ordering(852) 00:14:59.210 fused_ordering(853) 00:14:59.210 fused_ordering(854) 00:14:59.210 fused_ordering(855) 00:14:59.210 fused_ordering(856) 00:14:59.210 fused_ordering(857) 00:14:59.210 fused_ordering(858) 00:14:59.210 fused_ordering(859) 00:14:59.210 fused_ordering(860) 00:14:59.210 fused_ordering(861) 00:14:59.210 fused_ordering(862) 00:14:59.210 fused_ordering(863) 00:14:59.210 fused_ordering(864) 00:14:59.210 fused_ordering(865) 00:14:59.210 fused_ordering(866) 00:14:59.210 fused_ordering(867) 00:14:59.210 fused_ordering(868) 00:14:59.210 fused_ordering(869) 00:14:59.210 fused_ordering(870) 00:14:59.210 fused_ordering(871) 00:14:59.210 fused_ordering(872) 00:14:59.210 fused_ordering(873) 00:14:59.210 fused_ordering(874) 00:14:59.210 fused_ordering(875) 00:14:59.210 fused_ordering(876) 
00:14:59.210 fused_ordering(877) 00:14:59.210 fused_ordering(878) 00:14:59.210 fused_ordering(879) 00:14:59.210 fused_ordering(880) 00:14:59.210 fused_ordering(881) 00:14:59.210 fused_ordering(882) 00:14:59.210 fused_ordering(883) 00:14:59.210 fused_ordering(884) 00:14:59.210 fused_ordering(885) 00:14:59.210 fused_ordering(886) 00:14:59.210 fused_ordering(887) 00:14:59.210 fused_ordering(888) 00:14:59.210 fused_ordering(889) 00:14:59.210 fused_ordering(890) 00:14:59.210 fused_ordering(891) 00:14:59.210 fused_ordering(892) 00:14:59.210 fused_ordering(893) 00:14:59.210 fused_ordering(894) 00:14:59.210 fused_ordering(895) 00:14:59.210 fused_ordering(896) 00:14:59.210 fused_ordering(897) 00:14:59.210 fused_ordering(898) 00:14:59.210 fused_ordering(899) 00:14:59.210 fused_ordering(900) 00:14:59.210 fused_ordering(901) 00:14:59.210 fused_ordering(902) 00:14:59.210 fused_ordering(903) 00:14:59.210 fused_ordering(904) 00:14:59.210 fused_ordering(905) 00:14:59.210 fused_ordering(906) 00:14:59.210 fused_ordering(907) 00:14:59.210 fused_ordering(908) 00:14:59.210 fused_ordering(909) 00:14:59.210 fused_ordering(910) 00:14:59.210 fused_ordering(911) 00:14:59.210 fused_ordering(912) 00:14:59.210 fused_ordering(913) 00:14:59.210 fused_ordering(914) 00:14:59.210 fused_ordering(915) 00:14:59.210 fused_ordering(916) 00:14:59.210 fused_ordering(917) 00:14:59.210 fused_ordering(918) 00:14:59.210 fused_ordering(919) 00:14:59.210 fused_ordering(920) 00:14:59.210 fused_ordering(921) 00:14:59.210 fused_ordering(922) 00:14:59.210 fused_ordering(923) 00:14:59.210 fused_ordering(924) 00:14:59.210 fused_ordering(925) 00:14:59.210 fused_ordering(926) 00:14:59.210 fused_ordering(927) 00:14:59.210 fused_ordering(928) 00:14:59.210 fused_ordering(929) 00:14:59.210 fused_ordering(930) 00:14:59.210 fused_ordering(931) 00:14:59.210 fused_ordering(932) 00:14:59.210 fused_ordering(933) 00:14:59.210 fused_ordering(934) 00:14:59.210 fused_ordering(935) 00:14:59.210 fused_ordering(936) 00:14:59.210 
fused_ordering(937) 00:14:59.210 fused_ordering(938) 00:14:59.210 fused_ordering(939) 00:14:59.210 fused_ordering(940) 00:14:59.210 fused_ordering(941) 00:14:59.210 fused_ordering(942) 00:14:59.210 fused_ordering(943) 00:14:59.210 fused_ordering(944) 00:14:59.210 fused_ordering(945) 00:14:59.210 fused_ordering(946) 00:14:59.210 fused_ordering(947) 00:14:59.210 fused_ordering(948) 00:14:59.210 fused_ordering(949) 00:14:59.210 fused_ordering(950) 00:14:59.210 fused_ordering(951) 00:14:59.210 fused_ordering(952) 00:14:59.210 fused_ordering(953) 00:14:59.210 fused_ordering(954) 00:14:59.210 fused_ordering(955) 00:14:59.210 fused_ordering(956) 00:14:59.210 fused_ordering(957) 00:14:59.210 fused_ordering(958) 00:14:59.210 fused_ordering(959) 00:14:59.210 fused_ordering(960) 00:14:59.210 fused_ordering(961) 00:14:59.210 fused_ordering(962) 00:14:59.210 fused_ordering(963) 00:14:59.210 fused_ordering(964) 00:14:59.210 fused_ordering(965) 00:14:59.210 fused_ordering(966) 00:14:59.210 fused_ordering(967) 00:14:59.210 fused_ordering(968) 00:14:59.210 fused_ordering(969) 00:14:59.210 fused_ordering(970) 00:14:59.210 fused_ordering(971) 00:14:59.210 fused_ordering(972) 00:14:59.210 fused_ordering(973) 00:14:59.210 fused_ordering(974) 00:14:59.210 fused_ordering(975) 00:14:59.210 fused_ordering(976) 00:14:59.210 fused_ordering(977) 00:14:59.210 fused_ordering(978) 00:14:59.210 fused_ordering(979) 00:14:59.210 fused_ordering(980) 00:14:59.210 fused_ordering(981) 00:14:59.210 fused_ordering(982) 00:14:59.210 fused_ordering(983) 00:14:59.210 fused_ordering(984) 00:14:59.210 fused_ordering(985) 00:14:59.210 fused_ordering(986) 00:14:59.210 fused_ordering(987) 00:14:59.210 fused_ordering(988) 00:14:59.210 fused_ordering(989) 00:14:59.210 fused_ordering(990) 00:14:59.210 fused_ordering(991) 00:14:59.210 fused_ordering(992) 00:14:59.210 fused_ordering(993) 00:14:59.210 fused_ordering(994) 00:14:59.210 fused_ordering(995) 00:14:59.210 fused_ordering(996) 00:14:59.210 fused_ordering(997) 
00:14:59.210 fused_ordering(998) 00:14:59.210 fused_ordering(999) 00:14:59.210 fused_ordering(1000) 00:14:59.210 fused_ordering(1001) 00:14:59.210 fused_ordering(1002) 00:14:59.210 fused_ordering(1003) 00:14:59.210 fused_ordering(1004) 00:14:59.210 fused_ordering(1005) 00:14:59.210 fused_ordering(1006) 00:14:59.210 fused_ordering(1007) 00:14:59.210 fused_ordering(1008) 00:14:59.210 fused_ordering(1009) 00:14:59.210 fused_ordering(1010) 00:14:59.210 fused_ordering(1011) 00:14:59.210 fused_ordering(1012) 00:14:59.210 fused_ordering(1013) 00:14:59.210 fused_ordering(1014) 00:14:59.210 fused_ordering(1015) 00:14:59.210 fused_ordering(1016) 00:14:59.211 fused_ordering(1017) 00:14:59.211 fused_ordering(1018) 00:14:59.211 fused_ordering(1019) 00:14:59.211 fused_ordering(1020) 00:14:59.211 fused_ordering(1021) 00:14:59.211 fused_ordering(1022) 00:14:59.211 fused_ordering(1023) 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.211 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.211 rmmod nvme_tcp 00:14:59.211 rmmod nvme_fabrics 00:14:59.211 rmmod nvme_keyring 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 132614 ']' 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 132614 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 132614 ']' 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 132614 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.470 12:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132614 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132614' 00:14:59.470 killing process with pid 132614 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 132614 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 132614 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.470 12:29:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:02.006 00:15:02.006 real 0m10.736s 00:15:02.006 user 0m5.051s 00:15:02.006 sys 0m5.802s 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:02.006 ************************************ 00:15:02.006 END TEST nvmf_fused_ordering 00:15:02.006 ************************************ 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:02.006 12:29:07 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.006 ************************************ 00:15:02.006 START TEST nvmf_ns_masking 00:15:02.006 ************************************ 00:15:02.006 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:02.006 * Looking for test storage... 00:15:02.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.007 12:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.007 --rc genhtml_branch_coverage=1 00:15:02.007 --rc genhtml_function_coverage=1 00:15:02.007 --rc genhtml_legend=1 00:15:02.007 --rc geninfo_all_blocks=1 00:15:02.007 --rc geninfo_unexecuted_blocks=1 00:15:02.007 00:15:02.007 ' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.007 --rc genhtml_branch_coverage=1 00:15:02.007 --rc genhtml_function_coverage=1 00:15:02.007 --rc genhtml_legend=1 00:15:02.007 --rc geninfo_all_blocks=1 00:15:02.007 --rc geninfo_unexecuted_blocks=1 00:15:02.007 00:15:02.007 ' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.007 --rc genhtml_branch_coverage=1 00:15:02.007 --rc genhtml_function_coverage=1 00:15:02.007 --rc genhtml_legend=1 00:15:02.007 --rc geninfo_all_blocks=1 00:15:02.007 --rc geninfo_unexecuted_blocks=1 00:15:02.007 00:15:02.007 ' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:02.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.007 --rc genhtml_branch_coverage=1 00:15:02.007 --rc 
genhtml_function_coverage=1 00:15:02.007 --rc genhtml_legend=1 00:15:02.007 --rc geninfo_all_blocks=1 00:15:02.007 --rc geninfo_unexecuted_blocks=1 00:15:02.007 00:15:02.007 ' 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:02.007 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cacaabb5-6200-4293-b50f-36c8d64a96b2 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a617b9a2-8be4-46fa-914f-14f41d8bb0a9 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e2319512-448d-43b2-8ac8-84910eda857e 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.008 12:29:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:08.580 12:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.580 12:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:08.580 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:08.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.580 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:15:08.580 Found net devices under 0000:86:00.0: cvl_0_0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:08.581 Found net devices under 0000:86:00.1: cvl_0_1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:08.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:15:08.581 00:15:08.581 --- 10.0.0.2 ping statistics --- 00:15:08.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.581 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:15:08.581 00:15:08.581 --- 10.0.0.1 ping statistics --- 00:15:08.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.581 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=136484 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 136484 
00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 136484 ']' 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.581 [2024-11-20 12:29:13.634439] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:08.581 [2024-11-20 12:29:13.634490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.581 [2024-11-20 12:29:13.712569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.581 [2024-11-20 12:29:13.753659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.581 [2024-11-20 12:29:13.753695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:08.581 [2024-11-20 12:29:13.753703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.581 [2024-11-20 12:29:13.753709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.581 [2024-11-20 12:29:13.753715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.581 [2024-11-20 12:29:13.754245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.581 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:08.581 [2024-11-20 12:29:14.062115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.581 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:08.581 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:08.581 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:08.581 Malloc1 00:15:08.581 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:08.841 Malloc2 00:15:08.841 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:09.099 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:09.358 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.358 [2024-11-20 12:29:15.062641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.358 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:09.358 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2319512-448d-43b2-8ac8-84910eda857e -a 10.0.0.2 -s 4420 -i 4 00:15:09.618 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:09.618 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.618 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.618 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:09.618 12:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.521 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.778 [ 0]:0x1 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.778 
12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcc0e3a0954487be8c96457e75d409 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcc0e3a0954487be8c96457e75d409 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.778 [ 0]:0x1 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.778 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcc0e3a0954487be8c96457e75d409 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcc0e3a0954487be8c96457e75d409 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.035 [ 1]:0x2 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.035 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.292 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2319512-448d-43b2-8ac8-84910eda857e -a 10.0.0.2 -s 4420 -i 4 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.581 12:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:12.581 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:14.486 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:14.486 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:14.745 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.746 [ 0]:0x2 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.746 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.004 [ 0]:0x1 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.004 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.262 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcc0e3a0954487be8c96457e75d409 00:15:15.262 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcc0e3a0954487be8c96457e75d409 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.262 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:15.262 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.263 [ 1]:0x2 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.263 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.263 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.521 [ 0]:0x2 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:15.521 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.522 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.780 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:15.780 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e2319512-448d-43b2-8ac8-84910eda857e -a 10.0.0.2 -s 4420 -i 4 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:16.039 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:17.941 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.942 [ 0]:0x1 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.942 12:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87bcc0e3a0954487be8c96457e75d409 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87bcc0e3a0954487be8c96457e75d409 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.942 [ 1]:0x2 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.942 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.200 
12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.200 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.460 [ 0]:0x2 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.460 12:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:18.460 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.719 [2024-11-20 12:29:24.224567] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:18.719 request: 00:15:18.719 { 00:15:18.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.719 "nsid": 2, 00:15:18.719 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.719 "method": "nvmf_ns_remove_host", 00:15:18.719 "req_id": 1 00:15:18.719 } 00:15:18.719 Got JSON-RPC error response 00:15:18.719 response: 00:15:18.719 { 00:15:18.719 "code": -32602, 00:15:18.719 "message": "Invalid parameters" 00:15:18.719 } 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.719 12:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.719 [ 0]:0x2 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bee4cfcb665c4ec9a977f628144106f9 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bee4cfcb665c4ec9a977f628144106f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=138402 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 138402 /var/tmp/host.sock 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 138402 ']' 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:18.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.719 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:18.719 [2024-11-20 12:29:24.467841] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:15:18.720 [2024-11-20 12:29:24.467887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138402 ] 00:15:18.979 [2024-11-20 12:29:24.542623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.979 [2024-11-20 12:29:24.582946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.238 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.238 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:19.238 12:29:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.496 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.496 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cacaabb5-6200-4293-b50f-36c8d64a96b2 00:15:19.496 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:19.496 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CACAABB562004293B50F36C8D64A96B2 -i 00:15:19.755 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a617b9a2-8be4-46fa-914f-14f41d8bb0a9 00:15:19.755 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:19.755 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A617B9A28BE446FA914F14F41D8BB0A9 -i 00:15:20.014 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:20.272 12:29:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:20.272 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:20.272 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:20.838 nvme0n1 00:15:20.838 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:20.838 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:21.096 nvme1n2 00:15:21.096 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:21.096 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:21.096 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:21.096 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:21.097 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:21.355 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:21.355 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:21.355 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:21.355 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:21.615 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cacaabb5-6200-4293-b50f-36c8d64a96b2 == \c\a\c\a\a\b\b\5\-\6\2\0\0\-\4\2\9\3\-\b\5\0\f\-\3\6\c\8\d\6\4\a\9\6\b\2 ]] 00:15:21.615 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:21.615 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:21.615 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:21.615 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a617b9a2-8be4-46fa-914f-14f41d8bb0a9 == \a\6\1\7\b\9\a\2\-\8\b\e\4\-\4\6\f\a\-\9\1\4\f\-\1\4\f\4\1\d\8\b\b\0\a\9 ]] 00:15:21.615 12:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.873 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid cacaabb5-6200-4293-b50f-36c8d64a96b2 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CACAABB562004293B50F36C8D64A96B2 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CACAABB562004293B50F36C8D64A96B2 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:22.132 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CACAABB562004293B50F36C8D64A96B2 00:15:22.392 [2024-11-20 12:29:27.906719] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:22.392 [2024-11-20 12:29:27.906749] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:22.392 [2024-11-20 12:29:27.906758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.392 request: 00:15:22.392 { 00:15:22.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.392 "namespace": { 00:15:22.392 "bdev_name": "invalid", 00:15:22.392 "nsid": 1, 00:15:22.392 "nguid": "CACAABB562004293B50F36C8D64A96B2", 00:15:22.392 "no_auto_visible": false 00:15:22.392 }, 00:15:22.392 "method": "nvmf_subsystem_add_ns", 00:15:22.392 "req_id": 1 00:15:22.392 } 00:15:22.392 Got JSON-RPC error response 00:15:22.392 response: 00:15:22.392 { 00:15:22.392 "code": -32602, 00:15:22.392 "message": "Invalid parameters" 00:15:22.392 } 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid cacaabb5-6200-4293-b50f-36c8d64a96b2 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:22.392 12:29:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CACAABB562004293B50F36C8D64A96B2 -i 00:15:22.392 12:29:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 138402 ']' 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138402' 00:15:24.925 killing process with pid 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 138402 00:15:24.925 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.185 rmmod nvme_tcp 00:15:25.185 rmmod 
nvme_fabrics 00:15:25.185 rmmod nvme_keyring 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 136484 ']' 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 136484 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 136484 ']' 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 136484 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:25.185 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136484 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136484' 00:15:25.444 killing process with pid 136484 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 136484 00:15:25.444 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 136484 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:25.444 12:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.444 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:27.982 00:15:27.982 real 0m25.922s 00:15:27.982 user 0m31.039s 00:15:27.982 sys 0m7.095s 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:27.982 ************************************ 00:15:27.982 END TEST nvmf_ns_masking 00:15:27.982 ************************************ 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.982 ************************************ 00:15:27.982 START TEST nvmf_nvme_cli 00:15:27.982 ************************************ 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:27.982 * Looking for test storage... 00:15:27.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.982 12:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.982 --rc genhtml_branch_coverage=1 00:15:27.982 --rc genhtml_function_coverage=1 00:15:27.982 --rc genhtml_legend=1 00:15:27.982 --rc geninfo_all_blocks=1 00:15:27.982 --rc geninfo_unexecuted_blocks=1 00:15:27.982 
00:15:27.982 ' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.982 --rc genhtml_branch_coverage=1 00:15:27.982 --rc genhtml_function_coverage=1 00:15:27.982 --rc genhtml_legend=1 00:15:27.982 --rc geninfo_all_blocks=1 00:15:27.982 --rc geninfo_unexecuted_blocks=1 00:15:27.982 00:15:27.982 ' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.982 --rc genhtml_branch_coverage=1 00:15:27.982 --rc genhtml_function_coverage=1 00:15:27.982 --rc genhtml_legend=1 00:15:27.982 --rc geninfo_all_blocks=1 00:15:27.982 --rc geninfo_unexecuted_blocks=1 00:15:27.982 00:15:27.982 ' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.982 --rc genhtml_branch_coverage=1 00:15:27.982 --rc genhtml_function_coverage=1 00:15:27.982 --rc genhtml_legend=1 00:15:27.982 --rc geninfo_all_blocks=1 00:15:27.982 --rc geninfo_unexecuted_blocks=1 00:15:27.982 00:15:27.982 ' 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.982 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
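The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) splits each version string on `.`, `-` and `:` via `IFS=.-:` and walks the components pairwise, which is how `lcov --version` 1.15 is judged older than 2 when selecting the `LCOV_OPTS` flags. A rough standalone equivalent (a sketch under the assumption that all components are numeric, which holds for the versions appearing in this log):

```shell
# Return 0 (true) when $1 is a strictly lower version than $2.
# Missing components count as 0, mirroring the cmp_versions walk above.
version_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i a b
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0} b=${v2[i]:-0}
    ((a < b)) && return 0   # first differing component decides
    ((a > b)) && return 1
  done
  return 1                  # equal versions are not "less than"
}
```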
00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.983 12:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:27.983 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:34.638 12:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:34.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:34.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.638 12:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.638 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:34.638 Found net devices under 0000:86:00.0: cvl_0_0 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:34.639 Found net devices under 0000:86:00.1: cvl_0_1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.639 12:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:34.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:15:34.639 00:15:34.639 --- 10.0.0.2 ping statistics --- 00:15:34.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.639 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:15:34.639 00:15:34.639 --- 10.0.0.1 ping statistics --- 00:15:34.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.639 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.639 12:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=143119 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 143119 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 143119 ']' 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 [2024-11-20 12:29:39.593585] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
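The `nvmf_tcp_init` steps logged above move one E810 port (`cvl_0_0`) into a private namespace as the target side and leave the other (`cvl_0_1`) in the default namespace as the initiator, so traffic crosses a real link. Condensed as a dry-run script that only prints what would be executed (the `run` echo wrapper is mine; the commands and addresses are the ones in the log, and the real versions need root plus the `cvl_0_*` interfaces):

```shell
#!/usr/bin/env bash
# Dry run of the logged nvmf_tcp_init sequence: print, don't execute.
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
```

Both pings in the log succeed (0% loss), confirming the two-namespace topology before the target starts.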
00:15:34.639 [2024-11-20 12:29:39.593630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.639 [2024-11-20 12:29:39.669377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.639 [2024-11-20 12:29:39.712345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.639 [2024-11-20 12:29:39.712382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.639 [2024-11-20 12:29:39.712390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.639 [2024-11-20 12:29:39.712396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.639 [2024-11-20 12:29:39.712401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
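The EAL line above was launched with `-m 0xF` (passed through as `-c 0xF`) and reports "Total cores available: 4"; four reactors then come up on cores 0-3. The hex core mask maps to cores by bit position, which can be sketched as (illustration only, not SPDK code):

```shell
#!/usr/bin/env bash
# Decode a DPDK/SPDK-style hex core mask into the list of selected cores.
mask=0xF
cores=()
for (( core = 0; core < 64; core++ )); do
  # bit N set in the mask means core N is selected
  if (( (mask >> core) & 1 )); then
    cores+=("$core")
  fi
done
echo "cores: ${cores[*]}"   # 0xF -> cores: 0 1 2 3
```

A mask like `0x5` would instead select cores 0 and 2, which is why sparse masks are common for NUMA-aware placement.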
00:15:34.639 [2024-11-20 12:29:39.713994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.639 [2024-11-20 12:29:39.714104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.639 [2024-11-20 12:29:39.714233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.639 [2024-11-20 12:29:39.714233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 [2024-11-20 12:29:39.850047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
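The `waitforlisten 143119` call above blocks until `nvmf_tgt` is up and answering on `/var/tmp/spdk.sock`, then falls through its `(( i == 0 ))` check and returns 0. A minimal stand-in for that polling pattern, hedged: `wait_for_path` is a hypothetical name, and the real helper in `autotest_common.sh` additionally checks that the PID is alive and that the socket accepts RPCs, not just that a path exists:

```shell
#!/usr/bin/env bash
# Poll until a path appears, with a bounded retry budget.
wait_for_path() {
  local path=$1 retries=${2:-100} i=0
  while (( i++ < retries )); do
    [[ -e "$path" ]] && return 0
    sleep 0.1
  done
  return 1
}

# Demonstration: create the "socket" shortly after we start waiting.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
if wait_for_path "$tmp/spdk.sock"; then
  echo "listener ready"
fi
wait
rm -rf "$tmp"
```

Bounding the retries matters in CI: a target that never comes up should fail the stage quickly instead of hanging the whole pipeline.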
00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 Malloc0 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 Malloc1 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 [2024-11-20 12:29:39.945559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.640 12:29:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:34.640 00:15:34.640 Discovery Log Number of Records 2, Generation counter 2 00:15:34.640 =====Discovery Log Entry 0====== 00:15:34.640 trtype: tcp 00:15:34.640 adrfam: ipv4 00:15:34.640 subtype: current discovery subsystem 00:15:34.640 treq: not required 00:15:34.640 portid: 0 00:15:34.640 trsvcid: 4420 
00:15:34.640 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:34.640 traddr: 10.0.0.2 00:15:34.640 eflags: explicit discovery connections, duplicate discovery information 00:15:34.640 sectype: none 00:15:34.640 =====Discovery Log Entry 1====== 00:15:34.640 trtype: tcp 00:15:34.640 adrfam: ipv4 00:15:34.640 subtype: nvme subsystem 00:15:34.640 treq: not required 00:15:34.640 portid: 0 00:15:34.640 trsvcid: 4420 00:15:34.640 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:34.640 traddr: 10.0.0.2 00:15:34.640 eflags: none 00:15:34.640 sectype: none 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:34.640 12:29:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.577 12:29:41 
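Between the transport init and the discovery output above, the harness provisions the subsystem over JSON-RPC (`rpc_cmd` wraps `scripts/rpc.py` against the target's socket). Reconstructed as a dry-run script that only prints the calls it would make (the `rpc` echo wrapper is mine; the method names and arguments are taken verbatim from the log):

```shell
#!/usr/bin/env bash
# Dry-run wrapper: print each rpc.py invocation instead of executing it.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two malloc namespaces are what later surface on the initiator as `/dev/nvme0n1` and `/dev/nvme0n2`, and the discovery listener is what makes `nvme discover` return the two log entries shown above.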
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:35.577 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:35.577 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.577 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:35.577 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:35.577 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:38.110 
12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:38.110 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:38.111 /dev/nvme0n2 ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.111 rmmod nvme_tcp 00:15:38.111 rmmod nvme_fabrics 00:15:38.111 rmmod nvme_keyring 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 143119 ']' 
00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 143119 ']' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143119' 00:15:38.111 killing process with pid 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 143119 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.111 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.172 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:40.172 00:15:40.173 real 0m12.550s 00:15:40.173 user 0m17.991s 00:15:40.173 sys 0m5.170s 00:15:40.173 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.173 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.173 ************************************ 00:15:40.173 END TEST nvmf_nvme_cli 00:15:40.173 ************************************ 00:15:40.173 12:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:40.173 12:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:40.432 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.432 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.432 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.432 ************************************ 00:15:40.432 START TEST 
nvmf_vfio_user 00:15:40.432 ************************************ 00:15:40.432 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:40.432 * Looking for test storage... 00:15:40.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.432 12:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:40.432 12:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:40.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.432 --rc genhtml_branch_coverage=1 00:15:40.432 --rc genhtml_function_coverage=1 00:15:40.432 --rc genhtml_legend=1 00:15:40.432 --rc geninfo_all_blocks=1 00:15:40.432 --rc geninfo_unexecuted_blocks=1 00:15:40.432 00:15:40.432 ' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:40.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.432 --rc genhtml_branch_coverage=1 00:15:40.432 --rc genhtml_function_coverage=1 00:15:40.432 --rc genhtml_legend=1 00:15:40.432 --rc geninfo_all_blocks=1 00:15:40.432 --rc geninfo_unexecuted_blocks=1 00:15:40.432 00:15:40.432 ' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:40.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.432 --rc genhtml_branch_coverage=1 00:15:40.432 --rc genhtml_function_coverage=1 00:15:40.432 --rc genhtml_legend=1 00:15:40.432 --rc geninfo_all_blocks=1 00:15:40.432 --rc geninfo_unexecuted_blocks=1 00:15:40.432 00:15:40.432 ' 00:15:40.432 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:40.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.432 --rc genhtml_branch_coverage=1 00:15:40.432 --rc genhtml_function_coverage=1 00:15:40.432 --rc genhtml_legend=1 00:15:40.432 --rc geninfo_all_blocks=1 00:15:40.433 --rc geninfo_unexecuted_blocks=1 00:15:40.433 00:15:40.433 ' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.433 
12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:40.433 12:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=144352 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 144352' 00:15:40.433 Process pid: 144352 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 144352 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
144352 ']' 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.433 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:40.692 [2024-11-20 12:29:46.220841] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:40.692 [2024-11-20 12:29:46.220889] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.692 [2024-11-20 12:29:46.293967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.692 [2024-11-20 12:29:46.335742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.692 [2024-11-20 12:29:46.335778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.692 [2024-11-20 12:29:46.335785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.692 [2024-11-20 12:29:46.335791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.692 [2024-11-20 12:29:46.335796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.692 [2024-11-20 12:29:46.337370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.692 [2024-11-20 12:29:46.337481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.692 [2024-11-20 12:29:46.337573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.692 [2024-11-20 12:29:46.337575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.692 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.692 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:40.693 12:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:42.070 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:42.329 Malloc1 00:15:42.329 12:29:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:42.329 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:42.587 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:42.846 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.846 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:42.846 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:43.105 Malloc2 00:15:43.105 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:43.364 12:29:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:43.364 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:43.623 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:43.623 [2024-11-20 12:29:49.314807] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:15:43.623 [2024-11-20 12:29:49.314852] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144889 ] 00:15:43.623 [2024-11-20 12:29:49.355654] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:43.623 [2024-11-20 12:29:49.361034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.623 [2024-11-20 12:29:49.361054] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5c9dd74000 00:15:43.623 [2024-11-20 12:29:49.362034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.363033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.364039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.365043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.366052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.367058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.368065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.369071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.623 [2024-11-20 12:29:49.370077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.623 [2024-11-20 12:29:49.370086] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5c9dd69000 00:15:43.623 [2024-11-20 12:29:49.371000] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.623 [2024-11-20 12:29:49.380449] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:43.623 [2024-11-20 12:29:49.380477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:43.883 [2024-11-20 12:29:49.386165] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:43.884 [2024-11-20 12:29:49.386210] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.884 [2024-11-20 12:29:49.386282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:43.884 [2024-11-20 12:29:49.386297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:43.884 [2024-11-20 12:29:49.386303] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:43.884 [2024-11-20 12:29:49.387168] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:43.884 [2024-11-20 12:29:49.387177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:43.884 [2024-11-20 12:29:49.387183] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:43.884 [2024-11-20 12:29:49.388170] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:43.884 [2024-11-20 12:29:49.388178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:43.884 [2024-11-20 12:29:49.388185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.389182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:43.884 [2024-11-20 12:29:49.389190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.390188] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:43.884 [2024-11-20 12:29:49.390196] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:43.884 [2024-11-20 12:29:49.390203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.390212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.390319] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:43.884 [2024-11-20 12:29:49.390324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.390329] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:43.884 [2024-11-20 12:29:49.391195] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:43.884 [2024-11-20 12:29:49.392197] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:43.884 [2024-11-20 12:29:49.393212] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:43.884 [2024-11-20 12:29:49.394213] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.884 [2024-11-20 12:29:49.394274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.884 [2024-11-20 12:29:49.395230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:43.884 [2024-11-20 12:29:49.395238] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.884 [2024-11-20 12:29:49.395243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:43.884 [2024-11-20 12:29:49.395266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395281] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.884 [2024-11-20 12:29:49.395286] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.884 [2024-11-20 12:29:49.395289] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.884 [2024-11-20 12:29:49.395303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.884 [2024-11-20 12:29:49.395343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:15:43.884 [2024-11-20 12:29:49.395353] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:43.884 [2024-11-20 12:29:49.395358] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:43.884 [2024-11-20 12:29:49.395361] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:43.884 [2024-11-20 12:29:49.395366] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.884 [2024-11-20 12:29:49.395374] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:43.884 [2024-11-20 12:29:49.395378] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:43.884 [2024-11-20 12:29:49.395383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:43.884 [2024-11-20 12:29:49.395412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.884 [2024-11-20 12:29:49.395422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.884 [2024-11-20 12:29:49.395430] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.884 [2024-11-20 12:29:49.395438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.884 [2024-11-20 12:29:49.395445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.884 [2024-11-20 12:29:49.395449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.884 [2024-11-20 12:29:49.395473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.884 [2024-11-20 12:29:49.395479] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:43.884 [2024-11-20 12:29:49.395484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.884 [2024-11-20 12:29:49.395514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.884 [2024-11-20 12:29:49.395563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.884 [2024-11-20 12:29:49.395578] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.885 [2024-11-20 12:29:49.395582] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.885 [2024-11-20 12:29:49.395585] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395616] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:43.885 [2024-11-20 12:29:49.395630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395643] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.885 [2024-11-20 12:29:49.395647] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.885 [2024-11-20 12:29:49.395650] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395697] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.885 [2024-11-20 12:29:49.395701] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.885 [2024-11-20 12:29:49.395704] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:15:43.885 [2024-11-20 12:29:49.395731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395763] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:43.885 [2024-11-20 12:29:49.395767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:43.885 [2024-11-20 12:29:49.395773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:43.885 [2024-11-20 12:29:49.395790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395868] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:43.885 [2024-11-20 12:29:49.395873] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:43.885 [2024-11-20 12:29:49.395876] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:43.885 [2024-11-20 12:29:49.395879] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:43.885 [2024-11-20 12:29:49.395882] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:43.885 [2024-11-20 12:29:49.395888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:43.885 [2024-11-20 12:29:49.395894] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:43.885 [2024-11-20 12:29:49.395898] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:43.885 [2024-11-20 12:29:49.395901] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:43.885 [2024-11-20 12:29:49.395917] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.885 [2024-11-20 12:29:49.395920] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395932] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:43.885 [2024-11-20 12:29:49.395936] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:43.885 [2024-11-20 12:29:49.395939] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.885 [2024-11-20 12:29:49.395944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:43.885 [2024-11-20 12:29:49.395950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 
12:29:49.395960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:43.885 [2024-11-20 12:29:49.395978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:43.885 ===================================================== 00:15:43.885 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:43.885 ===================================================== 00:15:43.885 Controller Capabilities/Features 00:15:43.885 ================================ 00:15:43.885 Vendor ID: 4e58 00:15:43.885 Subsystem Vendor ID: 4e58 00:15:43.885 Serial Number: SPDK1 00:15:43.885 Model Number: SPDK bdev Controller 00:15:43.885 Firmware Version: 25.01 00:15:43.885 Recommended Arb Burst: 6 00:15:43.885 IEEE OUI Identifier: 8d 6b 50 00:15:43.885 Multi-path I/O 00:15:43.885 May have multiple subsystem ports: Yes 00:15:43.885 May have multiple controllers: Yes 00:15:43.885 Associated with SR-IOV VF: No 00:15:43.885 Max Data Transfer Size: 131072 00:15:43.885 Max Number of Namespaces: 32 00:15:43.885 Max Number of I/O Queues: 127 00:15:43.885 NVMe Specification Version (VS): 1.3 00:15:43.885 NVMe Specification Version (Identify): 1.3 00:15:43.885 Maximum Queue Entries: 256 00:15:43.885 Contiguous Queues Required: Yes 00:15:43.885 Arbitration Mechanisms Supported 00:15:43.885 Weighted Round Robin: Not Supported 00:15:43.885 Vendor Specific: Not Supported 00:15:43.885 Reset Timeout: 15000 ms 00:15:43.885 Doorbell Stride: 4 bytes 00:15:43.885 NVM Subsystem Reset: Not Supported 00:15:43.885 Command Sets Supported 00:15:43.885 NVM Command Set: Supported 00:15:43.885 Boot Partition: Not Supported 00:15:43.885 Memory Page Size Minimum: 4096 bytes 00:15:43.885 
Memory Page Size Maximum: 4096 bytes 00:15:43.885 Persistent Memory Region: Not Supported 00:15:43.885 Optional Asynchronous Events Supported 00:15:43.885 Namespace Attribute Notices: Supported 00:15:43.885 Firmware Activation Notices: Not Supported 00:15:43.885 ANA Change Notices: Not Supported 00:15:43.886 PLE Aggregate Log Change Notices: Not Supported 00:15:43.886 LBA Status Info Alert Notices: Not Supported 00:15:43.886 EGE Aggregate Log Change Notices: Not Supported 00:15:43.886 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.886 Zone Descriptor Change Notices: Not Supported 00:15:43.886 Discovery Log Change Notices: Not Supported 00:15:43.886 Controller Attributes 00:15:43.886 128-bit Host Identifier: Supported 00:15:43.886 Non-Operational Permissive Mode: Not Supported 00:15:43.886 NVM Sets: Not Supported 00:15:43.886 Read Recovery Levels: Not Supported 00:15:43.886 Endurance Groups: Not Supported 00:15:43.886 Predictable Latency Mode: Not Supported 00:15:43.886 Traffic Based Keep ALive: Not Supported 00:15:43.886 Namespace Granularity: Not Supported 00:15:43.886 SQ Associations: Not Supported 00:15:43.886 UUID List: Not Supported 00:15:43.886 Multi-Domain Subsystem: Not Supported 00:15:43.886 Fixed Capacity Management: Not Supported 00:15:43.886 Variable Capacity Management: Not Supported 00:15:43.886 Delete Endurance Group: Not Supported 00:15:43.886 Delete NVM Set: Not Supported 00:15:43.886 Extended LBA Formats Supported: Not Supported 00:15:43.886 Flexible Data Placement Supported: Not Supported 00:15:43.886 00:15:43.886 Controller Memory Buffer Support 00:15:43.886 ================================ 00:15:43.886 Supported: No 00:15:43.886 00:15:43.886 Persistent Memory Region Support 00:15:43.886 ================================ 00:15:43.886 Supported: No 00:15:43.886 00:15:43.886 Admin Command Set Attributes 00:15:43.886 ============================ 00:15:43.886 Security Send/Receive: Not Supported 00:15:43.886 Format NVM: Not Supported 
00:15:43.886 Firmware Activate/Download: Not Supported 00:15:43.886 Namespace Management: Not Supported 00:15:43.886 Device Self-Test: Not Supported 00:15:43.886 Directives: Not Supported 00:15:43.886 NVMe-MI: Not Supported 00:15:43.886 Virtualization Management: Not Supported 00:15:43.886 Doorbell Buffer Config: Not Supported 00:15:43.886 Get LBA Status Capability: Not Supported 00:15:43.886 Command & Feature Lockdown Capability: Not Supported 00:15:43.886 Abort Command Limit: 4 00:15:43.886 Async Event Request Limit: 4 00:15:43.886 Number of Firmware Slots: N/A 00:15:43.886 Firmware Slot 1 Read-Only: N/A 00:15:43.886 Firmware Activation Without Reset: N/A 00:15:43.886 Multiple Update Detection Support: N/A 00:15:43.886 Firmware Update Granularity: No Information Provided 00:15:43.886 Per-Namespace SMART Log: No 00:15:43.886 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.886 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:43.886 Command Effects Log Page: Supported 00:15:43.886 Get Log Page Extended Data: Supported 00:15:43.886 Telemetry Log Pages: Not Supported 00:15:43.886 Persistent Event Log Pages: Not Supported 00:15:43.886 Supported Log Pages Log Page: May Support 00:15:43.886 Commands Supported & Effects Log Page: Not Supported 00:15:43.886 Feature Identifiers & Effects Log Page:May Support 00:15:43.886 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.886 Data Area 4 for Telemetry Log: Not Supported 00:15:43.886 Error Log Page Entries Supported: 128 00:15:43.886 Keep Alive: Supported 00:15:43.886 Keep Alive Granularity: 10000 ms 00:15:43.886 00:15:43.886 NVM Command Set Attributes 00:15:43.886 ========================== 00:15:43.886 Submission Queue Entry Size 00:15:43.886 Max: 64 00:15:43.886 Min: 64 00:15:43.886 Completion Queue Entry Size 00:15:43.886 Max: 16 00:15:43.886 Min: 16 00:15:43.886 Number of Namespaces: 32 00:15:43.886 Compare Command: Supported 00:15:43.886 Write Uncorrectable Command: Not Supported 00:15:43.886 Dataset 
Management Command: Supported 00:15:43.886 Write Zeroes Command: Supported 00:15:43.886 Set Features Save Field: Not Supported 00:15:43.886 Reservations: Not Supported 00:15:43.886 Timestamp: Not Supported 00:15:43.886 Copy: Supported 00:15:43.886 Volatile Write Cache: Present 00:15:43.886 Atomic Write Unit (Normal): 1 00:15:43.886 Atomic Write Unit (PFail): 1 00:15:43.886 Atomic Compare & Write Unit: 1 00:15:43.886 Fused Compare & Write: Supported 00:15:43.886 Scatter-Gather List 00:15:43.886 SGL Command Set: Supported (Dword aligned) 00:15:43.886 SGL Keyed: Not Supported 00:15:43.886 SGL Bit Bucket Descriptor: Not Supported 00:15:43.886 SGL Metadata Pointer: Not Supported 00:15:43.886 Oversized SGL: Not Supported 00:15:43.886 SGL Metadata Address: Not Supported 00:15:43.886 SGL Offset: Not Supported 00:15:43.886 Transport SGL Data Block: Not Supported 00:15:43.886 Replay Protected Memory Block: Not Supported 00:15:43.886 00:15:43.886 Firmware Slot Information 00:15:43.886 ========================= 00:15:43.886 Active slot: 1 00:15:43.886 Slot 1 Firmware Revision: 25.01 00:15:43.886 00:15:43.886 00:15:43.886 Commands Supported and Effects 00:15:43.886 ============================== 00:15:43.886 Admin Commands 00:15:43.886 -------------- 00:15:43.886 Get Log Page (02h): Supported 00:15:43.886 Identify (06h): Supported 00:15:43.886 Abort (08h): Supported 00:15:43.886 Set Features (09h): Supported 00:15:43.886 Get Features (0Ah): Supported 00:15:43.886 Asynchronous Event Request (0Ch): Supported 00:15:43.886 Keep Alive (18h): Supported 00:15:43.886 I/O Commands 00:15:43.886 ------------ 00:15:43.886 Flush (00h): Supported LBA-Change 00:15:43.886 Write (01h): Supported LBA-Change 00:15:43.886 Read (02h): Supported 00:15:43.886 Compare (05h): Supported 00:15:43.886 Write Zeroes (08h): Supported LBA-Change 00:15:43.886 Dataset Management (09h): Supported LBA-Change 00:15:43.886 Copy (19h): Supported LBA-Change 00:15:43.886 00:15:43.886 Error Log 00:15:43.886 ========= 
00:15:43.886 00:15:43.886 Arbitration 00:15:43.886 =========== 00:15:43.886 Arbitration Burst: 1 00:15:43.886 00:15:43.886 Power Management 00:15:43.886 ================ 00:15:43.886 Number of Power States: 1 00:15:43.886 Current Power State: Power State #0 00:15:43.886 Power State #0: 00:15:43.886 Max Power: 0.00 W 00:15:43.886 Non-Operational State: Operational 00:15:43.886 Entry Latency: Not Reported 00:15:43.886 Exit Latency: Not Reported 00:15:43.886 Relative Read Throughput: 0 00:15:43.886 Relative Read Latency: 0 00:15:43.886 Relative Write Throughput: 0 00:15:43.886 Relative Write Latency: 0 00:15:43.886 Idle Power: Not Reported 00:15:43.886 Active Power: Not Reported 00:15:43.886 Non-Operational Permissive Mode: Not Supported 00:15:43.886 00:15:43.886 Health Information 00:15:43.886 ================== 00:15:43.886 Critical Warnings: 00:15:43.886 Available Spare Space: OK 00:15:43.886 Temperature: OK 00:15:43.886 Device Reliability: OK 00:15:43.886 Read Only: No 00:15:43.886 Volatile Memory Backup: OK 00:15:43.886 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:43.886 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:43.886 Available Spare: 0% 00:15:43.886 Available Sp[2024-11-20 12:29:49.396066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:43.886 [2024-11-20 12:29:49.396075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:43.886 [2024-11-20 12:29:49.396100] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:43.886 [2024-11-20 12:29:49.396109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.886 [2024-11-20 12:29:49.396115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.886 [2024-11-20 12:29:49.396121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.886 [2024-11-20 12:29:49.396126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.886 [2024-11-20 12:29:49.396234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:43.886 [2024-11-20 12:29:49.396244] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:43.886 [2024-11-20 12:29:49.397244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.887 [2024-11-20 12:29:49.397296] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:43.887 [2024-11-20 12:29:49.397303] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:43.887 [2024-11-20 12:29:49.398252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:43.887 [2024-11-20 12:29:49.398262] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:43.887 [2024-11-20 12:29:49.398310] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:43.887 [2024-11-20 12:29:49.401207] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.887 are Threshold: 0% 00:15:43.887 Life Percentage Used: 0% 00:15:43.887 Data Units Read: 0 00:15:43.887 Data 
Units Written: 0 00:15:43.887 Host Read Commands: 0 00:15:43.887 Host Write Commands: 0 00:15:43.887 Controller Busy Time: 0 minutes 00:15:43.887 Power Cycles: 0 00:15:43.887 Power On Hours: 0 hours 00:15:43.887 Unsafe Shutdowns: 0 00:15:43.887 Unrecoverable Media Errors: 0 00:15:43.887 Lifetime Error Log Entries: 0 00:15:43.887 Warning Temperature Time: 0 minutes 00:15:43.887 Critical Temperature Time: 0 minutes 00:15:43.887 00:15:43.887 Number of Queues 00:15:43.887 ================ 00:15:43.887 Number of I/O Submission Queues: 127 00:15:43.887 Number of I/O Completion Queues: 127 00:15:43.887 00:15:43.887 Active Namespaces 00:15:43.887 ================= 00:15:43.887 Namespace ID:1 00:15:43.887 Error Recovery Timeout: Unlimited 00:15:43.887 Command Set Identifier: NVM (00h) 00:15:43.887 Deallocate: Supported 00:15:43.887 Deallocated/Unwritten Error: Not Supported 00:15:43.887 Deallocated Read Value: Unknown 00:15:43.887 Deallocate in Write Zeroes: Not Supported 00:15:43.887 Deallocated Guard Field: 0xFFFF 00:15:43.887 Flush: Supported 00:15:43.887 Reservation: Supported 00:15:43.887 Namespace Sharing Capabilities: Multiple Controllers 00:15:43.887 Size (in LBAs): 131072 (0GiB) 00:15:43.887 Capacity (in LBAs): 131072 (0GiB) 00:15:43.887 Utilization (in LBAs): 131072 (0GiB) 00:15:43.887 NGUID: A2FF113781904F369A8F568B09DF51B1 00:15:43.887 UUID: a2ff1137-8190-4f36-9a8f-568b09df51b1 00:15:43.887 Thin Provisioning: Not Supported 00:15:43.887 Per-NS Atomic Units: Yes 00:15:43.887 Atomic Boundary Size (Normal): 0 00:15:43.887 Atomic Boundary Size (PFail): 0 00:15:43.887 Atomic Boundary Offset: 0 00:15:43.887 Maximum Single Source Range Length: 65535 00:15:43.887 Maximum Copy Length: 65535 00:15:43.887 Maximum Source Range Count: 1 00:15:43.887 NGUID/EUI64 Never Reused: No 00:15:43.887 Namespace Write Protected: No 00:15:43.887 Number of LBA Formats: 1 00:15:43.887 Current LBA Format: LBA Format #00 00:15:43.887 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:15:43.887 00:15:43.887 12:29:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:43.887 [2024-11-20 12:29:49.625998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:49.158 Initializing NVMe Controllers 00:15:49.158 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:49.158 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:49.158 Initialization complete. Launching workers. 00:15:49.158 ======================================================== 00:15:49.158 Latency(us) 00:15:49.158 Device Information : IOPS MiB/s Average min max 00:15:49.158 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39906.57 155.89 3207.29 929.94 9631.71 00:15:49.158 ======================================================== 00:15:49.158 Total : 39906.57 155.89 3207.29 929.94 9631.71 00:15:49.158 00:15:49.158 [2024-11-20 12:29:54.643113] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.159 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:49.159 [2024-11-20 12:29:54.878205] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.431 Initializing NVMe Controllers 00:15:54.431 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:15:54.431 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:54.431 Initialization complete. Launching workers. 00:15:54.431 ======================================================== 00:15:54.431 Latency(us) 00:15:54.431 Device Information : IOPS MiB/s Average min max 00:15:54.431 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.10 62.65 7979.82 4956.76 15962.67 00:15:54.431 ======================================================== 00:15:54.431 Total : 16039.10 62.65 7979.82 4956.76 15962.67 00:15:54.431 00:15:54.431 [2024-11-20 12:29:59.914478] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.431 12:29:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.431 [2024-11-20 12:30:00.126440] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:59.702 [2024-11-20 12:30:05.201482] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:59.702 Initializing NVMe Controllers 00:15:59.702 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.702 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:59.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:59.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:59.702 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:59.702 Initialization complete. Launching workers. 
00:15:59.702 Starting thread on core 2 00:15:59.702 Starting thread on core 3 00:15:59.702 Starting thread on core 1 00:15:59.702 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:59.961 [2024-11-20 12:30:05.497712] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.255 [2024-11-20 12:30:08.565415] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.255 Initializing NVMe Controllers 00:16:03.255 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.255 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:03.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:03.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:03.255 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:03.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:03.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:03.255 Initialization complete. Launching workers. 
00:16:03.255 Starting thread on core 1 with urgent priority queue 00:16:03.255 Starting thread on core 2 with urgent priority queue 00:16:03.255 Starting thread on core 3 with urgent priority queue 00:16:03.255 Starting thread on core 0 with urgent priority queue 00:16:03.255 SPDK bdev Controller (SPDK1 ) core 0: 8062.00 IO/s 12.40 secs/100000 ios 00:16:03.255 SPDK bdev Controller (SPDK1 ) core 1: 8615.67 IO/s 11.61 secs/100000 ios 00:16:03.255 SPDK bdev Controller (SPDK1 ) core 2: 10056.00 IO/s 9.94 secs/100000 ios 00:16:03.255 SPDK bdev Controller (SPDK1 ) core 3: 7668.33 IO/s 13.04 secs/100000 ios 00:16:03.255 ======================================================== 00:16:03.255 00:16:03.255 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:03.255 [2024-11-20 12:30:08.855728] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.255 Initializing NVMe Controllers 00:16:03.255 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.255 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.255 Namespace ID: 1 size: 0GB 00:16:03.255 Initialization complete. 00:16:03.255 INFO: using host memory buffer for IO 00:16:03.255 Hello world! 
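As a quick sanity check on the arbitration summary above, the `secs/100000 ios` column is just 100000 divided by the reported `IO/s` for each core. The per-core figures below are copied from the log; the variable names are illustrative only and not part of the SPDK tooling:

```python
# Sanity-check the arbitration summary: secs/100000 ios == 100000 / IO/s.
# Per-core IO/s values are taken from the arbitration table in the log above.
io_per_sec = {0: 8062.00, 1: 8615.67, 2: 10056.00, 3: 7668.33}

for core, iops in sorted(io_per_sec.items()):
    secs_per_100k = 100000 / iops
    # Matches the logged values: 12.40, 11.61, 9.94, 13.04
    print(f"core {core}: {secs_per_100k:.2f} secs/100000 ios")
```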
00:16:03.255 [2024-11-20 12:30:08.889942] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.255 12:30:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:03.514 [2024-11-20 12:30:09.176632] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.452 Initializing NVMe Controllers 00:16:04.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.452 Initialization complete. Launching workers. 00:16:04.452 submit (in ns) avg, min, max = 6529.7, 3200.0, 4004540.0 00:16:04.452 complete (in ns) avg, min, max = 19838.7, 1707.6, 4000354.3 00:16:04.452 00:16:04.452 Submit histogram 00:16:04.452 ================ 00:16:04.452 Range in us Cumulative Count 00:16:04.452 3.200 - 3.215: 0.0238% ( 4) 00:16:04.452 3.215 - 3.230: 0.0595% ( 6) 00:16:04.452 3.230 - 3.246: 0.2439% ( 31) 00:16:04.452 3.246 - 3.261: 0.4700% ( 38) 00:16:04.452 3.261 - 3.276: 0.8805% ( 69) 00:16:04.452 3.276 - 3.291: 3.0164% ( 359) 00:16:04.452 3.291 - 3.307: 8.5673% ( 933) 00:16:04.452 3.307 - 3.322: 14.3860% ( 978) 00:16:04.452 3.322 - 3.337: 21.0554% ( 1121) 00:16:04.452 3.337 - 3.352: 28.2604% ( 1211) 00:16:04.452 3.352 - 3.368: 34.0136% ( 967) 00:16:04.452 3.368 - 3.383: 40.0107% ( 1008) 00:16:04.452 3.383 - 3.398: 45.7163% ( 959) 00:16:04.452 3.398 - 3.413: 51.3565% ( 948) 00:16:04.452 3.413 - 3.429: 56.9550% ( 941) 00:16:04.452 3.429 - 3.444: 64.3979% ( 1251) 00:16:04.452 3.444 - 3.459: 71.1744% ( 1139) 00:16:04.452 3.459 - 3.474: 76.1245% ( 832) 00:16:04.452 3.474 - 3.490: 80.4557% ( 728) 00:16:04.452 3.490 - 3.505: 83.7220% ( 549) 00:16:04.452 3.505 - 3.520: 85.7092% ( 334) 
00:16:04.452 3.520 - 3.535: 86.7801% ( 180) 00:16:04.452 3.535 - 3.550: 87.2977% ( 87) 00:16:04.452 3.550 - 3.566: 87.5595% ( 44) 00:16:04.452 3.566 - 3.581: 87.9284% ( 62) 00:16:04.452 3.581 - 3.596: 88.5352% ( 102) 00:16:04.452 3.596 - 3.611: 89.2908% ( 127) 00:16:04.452 3.611 - 3.627: 90.1892% ( 151) 00:16:04.453 3.627 - 3.642: 91.2958% ( 186) 00:16:04.453 3.642 - 3.657: 92.1763% ( 148) 00:16:04.453 3.657 - 3.672: 93.1045% ( 156) 00:16:04.453 3.672 - 3.688: 94.1635% ( 178) 00:16:04.453 3.688 - 3.703: 95.2701% ( 186) 00:16:04.453 3.703 - 3.718: 96.1566% ( 149) 00:16:04.453 3.718 - 3.733: 96.9836% ( 139) 00:16:04.453 3.733 - 3.749: 97.6261% ( 108) 00:16:04.453 3.749 - 3.764: 98.0604% ( 73) 00:16:04.453 3.764 - 3.779: 98.4115% ( 59) 00:16:04.453 3.779 - 3.794: 98.7327% ( 54) 00:16:04.453 3.794 - 3.810: 99.0302% ( 50) 00:16:04.453 3.810 - 3.825: 99.1730% ( 24) 00:16:04.453 3.825 - 3.840: 99.2444% ( 12) 00:16:04.453 3.840 - 3.855: 99.3396% ( 16) 00:16:04.453 3.855 - 3.870: 99.4110% ( 12) 00:16:04.453 3.870 - 3.886: 99.4467% ( 6) 00:16:04.453 3.886 - 3.901: 99.4645% ( 3) 00:16:04.453 3.901 - 3.931: 99.5002% ( 6) 00:16:04.453 3.931 - 3.962: 99.5181% ( 3) 00:16:04.453 3.962 - 3.992: 99.5419% ( 4) 00:16:04.453 4.023 - 4.053: 99.5478% ( 1) 00:16:04.453 4.053 - 4.084: 99.5597% ( 2) 00:16:04.453 4.175 - 4.206: 99.5657% ( 1) 00:16:04.453 4.206 - 4.236: 99.5716% ( 1) 00:16:04.453 4.450 - 4.480: 99.5776% ( 1) 00:16:04.453 4.663 - 4.693: 99.5835% ( 1) 00:16:04.453 4.815 - 4.846: 99.5895% ( 1) 00:16:04.453 4.937 - 4.968: 99.5954% ( 1) 00:16:04.453 4.968 - 4.998: 99.6014% ( 1) 00:16:04.453 5.029 - 5.059: 99.6073% ( 1) 00:16:04.453 5.059 - 5.090: 99.6192% ( 2) 00:16:04.453 5.242 - 5.272: 99.6252% ( 1) 00:16:04.453 5.272 - 5.303: 99.6311% ( 1) 00:16:04.453 5.333 - 5.364: 99.6371% ( 1) 00:16:04.453 5.394 - 5.425: 99.6490% ( 2) 00:16:04.453 5.425 - 5.455: 99.6609% ( 2) 00:16:04.453 5.455 - 5.486: 99.6668% ( 1) 00:16:04.453 5.486 - 5.516: 99.6787% ( 2) 00:16:04.453 5.699 - 5.730: 
99.6847% ( 1) 00:16:04.453 5.730 - 5.760: 99.6906% ( 1) 00:16:04.453 5.760 - 5.790: 99.6966% ( 1) 00:16:04.453 5.790 - 5.821: 99.7025% ( 1) 00:16:04.453 5.912 - 5.943: 99.7085% ( 1) 00:16:04.453 6.004 - 6.034: 99.7144% ( 1) 00:16:04.453 6.034 - 6.065: 99.7204% ( 1) 00:16:04.453 6.187 - 6.217: 99.7263% ( 1) 00:16:04.453 6.248 - 6.278: 99.7323% ( 1) 00:16:04.453 6.278 - 6.309: 99.7382% ( 1) 00:16:04.453 6.370 - 6.400: 99.7442% ( 1) 00:16:04.453 6.400 - 6.430: 99.7501% ( 1) 00:16:04.453 6.430 - 6.461: 99.7561% ( 1) 00:16:04.453 6.583 - 6.613: 99.7620% ( 1) 00:16:04.453 6.674 - 6.705: 99.7739% ( 2) 00:16:04.453 6.735 - 6.766: 99.7799% ( 1) 00:16:04.453 6.796 - 6.827: 99.7858% ( 1) 00:16:04.453 6.857 - 6.888: 99.7918% ( 1) 00:16:04.453 6.949 - 6.979: 99.8096% ( 3) 00:16:04.453 6.979 - 7.010: 99.8156% ( 1) 00:16:04.453 7.010 - 7.040: 99.8215% ( 1) 00:16:04.453 7.131 - 7.162: 99.8275% ( 1) 00:16:04.453 7.162 - 7.192: 99.8334% ( 1) 00:16:04.453 7.192 - 7.223: 99.8453% ( 2) 00:16:04.453 7.345 - 7.375: 99.8572% ( 2) 00:16:04.453 7.375 - 7.406: 99.8632% ( 1) 00:16:04.453 7.406 - 7.436: 99.8691% ( 1) 00:16:04.453 7.436 - 7.467: 99.8751% ( 1) 00:16:04.453 7.467 - 7.497: 99.8810% ( 1) 00:16:04.453 7.497 - 7.528: 99.8870% ( 1) 00:16:04.453 7.589 - 7.619: 99.8929% ( 1) 00:16:04.453 7.680 - 7.710: 99.8989% ( 1) 00:16:04.453 7.741 - 7.771: 99.9048% ( 1) 00:16:04.453 7.802 - 7.863: 99.9108% ( 1) 00:16:04.453 7.985 - 8.046: 99.9167% ( 1) 00:16:04.453 19.992 - 20.114: 99.9227% ( 1) 00:16:04.453 3994.575 - 4025.783: 100.0000% ( 13) 00:16:04.453 00:16:04.453 Complete histogram 00:16:04.453 ================== 00:16:04.453 Range in us Cumulative Count 00:16:04.453 1.707 - 1.714: 0.0178% ( 3) 00:16:04.453 1.714 - 1.722: 0.1249% ( 18) 00:16:04.453 1.722 - 1.730: 0.2320% ( 18) 00:16:04.453 1.730 - 1.737: 0.2618% ( 5) 00:16:04.453 1.737 - 1.745: 0.2856% ( 4) 00:16:04.453 1.745 - 1.752: 0.2975% ( 2) 00:16:04.453 1.752 - 1.760: 0.6782% ( 64) 00:16:04.453 1.760 - 1.768: 6.4196% ( 965) 
00:16:04.453 1.768 - 1.775: 24.8394% ( 3096) 00:16:04.453 1.775 - 1.783: 39.7132% ( 2500) 00:16:04.453 1.783 - 1.790: 44.2408% ( 761) 00:16:04.453 1.790 - 1.798: 46.2220% ( 333) 00:16:04.453 1.798 - 1.806: 47.5012% ( 215) 00:16:04.453 1.806 - 1.813: 48.0961% ( 100) 00:16:04.453 1.813 - 1.821: 49.9941% ( 319) 00:16:04.453 1.821 - 1.829: 60.7389% ( 1806) 00:16:04.453 1.829 - 1.836: 78.0819% ( 2915) 00:16:04.453 1.836 - 1.844: 88.1307% ( 1689) 00:16:04.453 1.844 - 1.851: 92.2656% ( 695) 00:16:04.453 1.851 - 1.859: 94.5740% ( 388) 00:16:04.453 1.859 - 1.867: 95.6747% ( 185) 00:16:04.453 1.867 - 1.874: 96.0673% ( 66) 00:16:04.453 1.874 - 1.882: 96.2637% ( 33) 00:16:04.453 1.882 - 1.890: 96.5493% ( 48) 00:16:04.453 1.890 - 1.897: 97.0907% ( 91) 00:16:04.453 1.897 - 1.905: 97.6797% ( 99) 00:16:04.453 1.905 - 1.912: 98.0902% ( 69) 00:16:04.453 1.912 - 1.920: 98.3282% ( 40) 00:16:04.453 1.920 - 1.928: 98.4531% ( 21) 00:16:04.453 1.928 - 1.935: 98.5067% ( 9) 00:16:04.453 1.935 - 1.943: 98.5781% ( 12) 00:16:04.453 1.943 - 1.950: 98.6554% ( 13) 00:16:04.453 1.950 - 1.966: 98.7922% ( 23) 00:16:04.453 1.966 - 1.981: 98.8398% ( 8) 00:16:04.453 1.981 - 1.996: 98.8636% ( 4) 00:16:04.453 1.996 - 2.011: 99.0183% ( 26) 00:16:04.453 2.011 - 2.027: 99.1135% ( 16) 00:16:04.453 2.027 - 2.042: 99.1611% ( 8) 00:16:04.453 2.042 - 2.057: 99.1730% ( 2) 00:16:04.453 2.057 - 2.072: 99.2385% ( 11) 00:16:04.453 2.072 - 2.088: 99.3337% ( 16) 00:16:04.453 2.088 - 2.103: 99.3753% ( 7) 00:16:04.453 2.103 - 2.118: 99.3812% ( 1) 00:16:04.453 2.149 - 2.164: 99.3931% ( 2) 00:16:04.453 2.179 - 2.194: 99.3991% ( 1) 00:16:04.453 2.194 - 2.210: 99.4050% ( 1) 00:16:04.453 2.210 - 2.225: 99.4110% ( 1) 00:16:04.453 2.301 - 2.316: 99.4169% ( 1) 00:16:04.453 2.362 - 2.377: 99.4229% ( 1) 00:16:04.453 2.712 - 2.728: 99.4288% ( 1) 00:16:04.453 3.383 - 3.398: 99.4348% ( 1) 00:16:04.453 3.992 - 4.023: 99.4407% ( 1) 00:16:04.453 4.084 - 4.114: 99.4526% ( 2) 00:16:04.453 4.236 - 4.267: 99.4586% ( 1) 00:16:04.453 4.297 - 
4.328: 99.4645% ( 1) 00:16:04.453 4.541 - 4.571: 99.4705% ( 1) 00:16:04.453 4.602 - 4.632: 99.4764% ( 1) 00:16:04.453 4.724 - 4.754: 99.4824% ( 1) 00:16:04.453 4.785 - 4.815: 99.4883% ( 1) 00:16:04.453 5.029 - 5.059: 99.4943% ( 1) 00:16:04.453 [2024-11-20 12:30:10.198506] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.713 5.394 - 5.425: 99.5062% ( 2) 00:16:04.713 5.425 - 5.455: 99.5121% ( 1) 00:16:04.713 5.455 - 5.486: 99.5181% ( 1) 00:16:04.713 5.516 - 5.547: 99.5240% ( 1) 00:16:04.713 6.126 - 6.156: 99.5300% ( 1) 00:16:04.713 6.217 - 6.248: 99.5359% ( 1) 00:16:04.713 6.491 - 6.522: 99.5419% ( 1) 00:16:04.713 38.766 - 39.010: 99.5478% ( 1) 00:16:04.713 3354.819 - 3370.423: 99.5538% ( 1) 00:16:04.713 3978.971 - 3994.575: 99.5597% ( 1) 00:16:04.713 3994.575 - 4025.783: 100.0000% ( 74) 00:16:04.713 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.713 [ 00:16:04.713 { 00:16:04.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.713 "subtype": "Discovery", 00:16:04.713 "listen_addresses": [], 00:16:04.713 "allow_any_host": true, 00:16:04.713 "hosts": [] 00:16:04.713 }, 00:16:04.713 { 00:16:04.713 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.713 "subtype": "NVMe",
00:16:04.713 "listen_addresses": [ 00:16:04.713 { 00:16:04.713 "trtype": "VFIOUSER", 00:16:04.713 "adrfam": "IPv4", 00:16:04.713 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.713 "trsvcid": "0" 00:16:04.713 } 00:16:04.713 ], 00:16:04.713 "allow_any_host": true, 00:16:04.713 "hosts": [], 00:16:04.713 "serial_number": "SPDK1", 00:16:04.713 "model_number": "SPDK bdev Controller", 00:16:04.713 "max_namespaces": 32, 00:16:04.713 "min_cntlid": 1, 00:16:04.713 "max_cntlid": 65519, 00:16:04.713 "namespaces": [ 00:16:04.713 { 00:16:04.713 "nsid": 1, 00:16:04.713 "bdev_name": "Malloc1", 00:16:04.713 "name": "Malloc1", 00:16:04.713 "nguid": "A2FF113781904F369A8F568B09DF51B1", 00:16:04.713 "uuid": "a2ff1137-8190-4f36-9a8f-568b09df51b1" 00:16:04.713 } 00:16:04.713 ] 00:16:04.713 }, 00:16:04.713 { 00:16:04.713 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.713 "subtype": "NVMe", 00:16:04.713 "listen_addresses": [ 00:16:04.713 { 00:16:04.713 "trtype": "VFIOUSER", 00:16:04.713 "adrfam": "IPv4", 00:16:04.713 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.713 "trsvcid": "0" 00:16:04.713 } 00:16:04.713 ], 00:16:04.713 "allow_any_host": true, 00:16:04.713 "hosts": [], 00:16:04.713 "serial_number": "SPDK2", 00:16:04.713 "model_number": "SPDK bdev Controller", 00:16:04.713 "max_namespaces": 32, 00:16:04.713 "min_cntlid": 1, 00:16:04.713 "max_cntlid": 65519, 00:16:04.713 "namespaces": [ 00:16:04.713 { 00:16:04.713 "nsid": 1, 00:16:04.713 "bdev_name": "Malloc2", 00:16:04.713 "name": "Malloc2", 00:16:04.713 "nguid": "C79DBC48F59A4DF695CC8884F54D7E96", 00:16:04.713 "uuid": "c79dbc48-f59a-4df6-95cc-8884f54d7e96" 00:16:04.713 } 00:16:04.713 ] 00:16:04.713 } 00:16:04.713 ] 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=148856 00:16:04.713 12:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.713 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:04.972 [2024-11-20 12:30:10.608632] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.972 Malloc3 00:16:04.972 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:05.232 [2024-11-20 12:30:10.858427] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.232 12:30:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.232 Asynchronous Event 
Request test 00:16:05.232 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.232 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.232 Registering asynchronous event callbacks... 00:16:05.232 Starting namespace attribute notice tests for all controllers... 00:16:05.232 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:05.232 aer_cb - Changed Namespace 00:16:05.232 Cleaning up... 00:16:05.492 [ 00:16:05.492 { 00:16:05.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.492 "subtype": "Discovery", 00:16:05.492 "listen_addresses": [], 00:16:05.492 "allow_any_host": true, 00:16:05.492 "hosts": [] 00:16:05.492 }, 00:16:05.492 { 00:16:05.492 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.492 "subtype": "NVMe", 00:16:05.492 "listen_addresses": [ 00:16:05.492 { 00:16:05.492 "trtype": "VFIOUSER", 00:16:05.492 "adrfam": "IPv4", 00:16:05.492 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.492 "trsvcid": "0" 00:16:05.492 } 00:16:05.492 ], 00:16:05.492 "allow_any_host": true, 00:16:05.492 "hosts": [], 00:16:05.492 "serial_number": "SPDK1", 00:16:05.492 "model_number": "SPDK bdev Controller", 00:16:05.492 "max_namespaces": 32, 00:16:05.492 "min_cntlid": 1, 00:16:05.492 "max_cntlid": 65519, 00:16:05.492 "namespaces": [ 00:16:05.492 { 00:16:05.492 "nsid": 1, 00:16:05.492 "bdev_name": "Malloc1", 00:16:05.492 "name": "Malloc1", 00:16:05.492 "nguid": "A2FF113781904F369A8F568B09DF51B1", 00:16:05.492 "uuid": "a2ff1137-8190-4f36-9a8f-568b09df51b1" 00:16:05.492 }, 00:16:05.492 { 00:16:05.492 "nsid": 2, 00:16:05.492 "bdev_name": "Malloc3", 00:16:05.492 "name": "Malloc3", 00:16:05.492 "nguid": "6D0048A7511242F0800C35992FEBB62E", 00:16:05.492 "uuid": "6d0048a7-5112-42f0-800c-35992febb62e" 00:16:05.492 } 00:16:05.492 ] 00:16:05.492 }, 00:16:05.492 { 00:16:05.492 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.492 "subtype": "NVMe", 00:16:05.492 "listen_addresses": [ 00:16:05.492 { 00:16:05.492 
"trtype": "VFIOUSER", 00:16:05.492 "adrfam": "IPv4", 00:16:05.492 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.492 "trsvcid": "0" 00:16:05.492 } 00:16:05.492 ], 00:16:05.492 "allow_any_host": true, 00:16:05.492 "hosts": [], 00:16:05.492 "serial_number": "SPDK2", 00:16:05.492 "model_number": "SPDK bdev Controller", 00:16:05.492 "max_namespaces": 32, 00:16:05.492 "min_cntlid": 1, 00:16:05.492 "max_cntlid": 65519, 00:16:05.492 "namespaces": [ 00:16:05.493 { 00:16:05.493 "nsid": 1, 00:16:05.493 "bdev_name": "Malloc2", 00:16:05.493 "name": "Malloc2", 00:16:05.493 "nguid": "C79DBC48F59A4DF695CC8884F54D7E96", 00:16:05.493 "uuid": "c79dbc48-f59a-4df6-95cc-8884f54d7e96" 00:16:05.493 } 00:16:05.493 ] 00:16:05.493 } 00:16:05.493 ] 00:16:05.493 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 148856 00:16:05.493 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:05.493 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:05.493 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:05.493 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:05.493 [2024-11-20 12:30:11.098761] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:16:05.493 [2024-11-20 12:30:11.098791] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148885 ] 00:16:05.493 [2024-11-20 12:30:11.137558] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:05.493 [2024-11-20 12:30:11.142815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:05.493 [2024-11-20 12:30:11.142841] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbe70bfc000 00:16:05.493 [2024-11-20 12:30:11.143812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.144819] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.145827] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.146841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.147844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.148855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.149859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:05.493 
[2024-11-20 12:30:11.150860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:05.493 [2024-11-20 12:30:11.151868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:05.493 [2024-11-20 12:30:11.151878] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbe70bf1000 00:16:05.493 [2024-11-20 12:30:11.152794] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:05.493 [2024-11-20 12:30:11.162154] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:05.493 [2024-11-20 12:30:11.162178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:05.493 [2024-11-20 12:30:11.166255] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:05.493 [2024-11-20 12:30:11.166294] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:05.493 [2024-11-20 12:30:11.166361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:05.493 [2024-11-20 12:30:11.166374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:05.493 [2024-11-20 12:30:11.166379] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:05.493 [2024-11-20 12:30:11.167264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:05.493 [2024-11-20 12:30:11.167274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:05.493 [2024-11-20 12:30:11.167280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:05.493 [2024-11-20 12:30:11.168265] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:05.493 [2024-11-20 12:30:11.168273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:05.493 [2024-11-20 12:30:11.168280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.169276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:05.493 [2024-11-20 12:30:11.169284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.170281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:05.493 [2024-11-20 12:30:11.170290] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:05.493 [2024-11-20 12:30:11.170295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.170301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.170408] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:05.493 [2024-11-20 12:30:11.170413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.170417] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:05.493 [2024-11-20 12:30:11.171286] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:05.493 [2024-11-20 12:30:11.172294] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:05.493 [2024-11-20 12:30:11.173304] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:05.493 [2024-11-20 12:30:11.174306] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:05.493 [2024-11-20 12:30:11.174346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:05.493 [2024-11-20 12:30:11.175318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:05.493 [2024-11-20 12:30:11.175327] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:05.493 [2024-11-20 12:30:11.175331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:05.493 [2024-11-20 12:30:11.175348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:05.493 [2024-11-20 12:30:11.175359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:05.493 [2024-11-20 12:30:11.175371] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.493 [2024-11-20 12:30:11.175375] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.493 [2024-11-20 12:30:11.175378] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.493 [2024-11-20 12:30:11.175390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.493 [2024-11-20 12:30:11.182210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:05.493 [2024-11-20 12:30:11.182223] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:05.493 [2024-11-20 12:30:11.182227] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:05.493 [2024-11-20 12:30:11.182231] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:05.493 [2024-11-20 12:30:11.182235] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:05.493 [2024-11-20 12:30:11.182244] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:05.493 [2024-11-20 12:30:11.182249] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:05.493 [2024-11-20 12:30:11.182253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:05.493 [2024-11-20 12:30:11.182261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:05.493 [2024-11-20 12:30:11.182270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:05.493 [2024-11-20 12:30:11.192209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:05.493 [2024-11-20 12:30:11.192221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.493 [2024-11-20 12:30:11.192228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.493 [2024-11-20 12:30:11.192235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.493 [2024-11-20 12:30:11.192243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.493 [2024-11-20 12:30:11.192247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:05.493 [2024-11-20 12:30:11.192254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.192262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.200209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.200219] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:05.494 [2024-11-20 12:30:11.200224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.200231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.200236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.200245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.208219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.208275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.208284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:05.494 
[2024-11-20 12:30:11.208291] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:05.494 [2024-11-20 12:30:11.208295] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:05.494 [2024-11-20 12:30:11.208301] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.494 [2024-11-20 12:30:11.208308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.216209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.216222] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:05.494 [2024-11-20 12:30:11.216230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.216237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.216243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.494 [2024-11-20 12:30:11.216247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.494 [2024-11-20 12:30:11.216250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.494 [2024-11-20 12:30:11.216255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.224208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.224222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.224229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.224235] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:05.494 [2024-11-20 12:30:11.224239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.494 [2024-11-20 12:30:11.224242] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.494 [2024-11-20 12:30:11.224247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.232208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.232218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232254] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:05.494 [2024-11-20 12:30:11.232258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:05.494 [2024-11-20 12:30:11.232265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:05.494 [2024-11-20 12:30:11.232282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.240209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.240224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:05.494 [2024-11-20 12:30:11.248208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:05.494 [2024-11-20 12:30:11.248220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:05.755 [2024-11-20 12:30:11.256207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:05.755 [2024-11-20 
12:30:11.256220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:05.755 [2024-11-20 12:30:11.264209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:05.755 [2024-11-20 12:30:11.264225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:05.755 [2024-11-20 12:30:11.264230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:05.755 [2024-11-20 12:30:11.264233] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:05.755 [2024-11-20 12:30:11.264236] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:05.755 [2024-11-20 12:30:11.264239] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:05.755 [2024-11-20 12:30:11.264244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:05.755 [2024-11-20 12:30:11.264251] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:05.755 [2024-11-20 12:30:11.264255] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:05.755 [2024-11-20 12:30:11.264258] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.755 [2024-11-20 12:30:11.264263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:05.755 [2024-11-20 12:30:11.264269] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:05.755 [2024-11-20 12:30:11.264273] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:05.755 [2024-11-20 12:30:11.264276] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.755 [2024-11-20 12:30:11.264281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:05.755 [2024-11-20 12:30:11.264287] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:05.755 [2024-11-20 12:30:11.264291] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:05.755 [2024-11-20 12:30:11.264294] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:05.755 [2024-11-20 12:30:11.264299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:05.755 [2024-11-20 12:30:11.272208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:05.755 [2024-11-20 12:30:11.272224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:05.755 [2024-11-20 12:30:11.272233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:05.755 [2024-11-20 12:30:11.272239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:05.755 ===================================================== 00:16:05.755 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:05.755 ===================================================== 00:16:05.755 Controller Capabilities/Features 00:16:05.755 
================================ 00:16:05.755 Vendor ID: 4e58 00:16:05.755 Subsystem Vendor ID: 4e58 00:16:05.755 Serial Number: SPDK2 00:16:05.755 Model Number: SPDK bdev Controller 00:16:05.755 Firmware Version: 25.01 00:16:05.755 Recommended Arb Burst: 6 00:16:05.755 IEEE OUI Identifier: 8d 6b 50 00:16:05.755 Multi-path I/O 00:16:05.755 May have multiple subsystem ports: Yes 00:16:05.755 May have multiple controllers: Yes 00:16:05.755 Associated with SR-IOV VF: No 00:16:05.755 Max Data Transfer Size: 131072 00:16:05.755 Max Number of Namespaces: 32 00:16:05.755 Max Number of I/O Queues: 127 00:16:05.755 NVMe Specification Version (VS): 1.3 00:16:05.755 NVMe Specification Version (Identify): 1.3 00:16:05.755 Maximum Queue Entries: 256 00:16:05.755 Contiguous Queues Required: Yes 00:16:05.755 Arbitration Mechanisms Supported 00:16:05.755 Weighted Round Robin: Not Supported 00:16:05.755 Vendor Specific: Not Supported 00:16:05.755 Reset Timeout: 15000 ms 00:16:05.755 Doorbell Stride: 4 bytes 00:16:05.755 NVM Subsystem Reset: Not Supported 00:16:05.755 Command Sets Supported 00:16:05.755 NVM Command Set: Supported 00:16:05.755 Boot Partition: Not Supported 00:16:05.755 Memory Page Size Minimum: 4096 bytes 00:16:05.755 Memory Page Size Maximum: 4096 bytes 00:16:05.755 Persistent Memory Region: Not Supported 00:16:05.756 Optional Asynchronous Events Supported 00:16:05.756 Namespace Attribute Notices: Supported 00:16:05.756 Firmware Activation Notices: Not Supported 00:16:05.756 ANA Change Notices: Not Supported 00:16:05.756 PLE Aggregate Log Change Notices: Not Supported 00:16:05.756 LBA Status Info Alert Notices: Not Supported 00:16:05.756 EGE Aggregate Log Change Notices: Not Supported 00:16:05.756 Normal NVM Subsystem Shutdown event: Not Supported 00:16:05.756 Zone Descriptor Change Notices: Not Supported 00:16:05.756 Discovery Log Change Notices: Not Supported 00:16:05.756 Controller Attributes 00:16:05.756 128-bit Host Identifier: Supported 00:16:05.756 
Non-Operational Permissive Mode: Not Supported 00:16:05.756 NVM Sets: Not Supported 00:16:05.756 Read Recovery Levels: Not Supported 00:16:05.756 Endurance Groups: Not Supported 00:16:05.756 Predictable Latency Mode: Not Supported 00:16:05.756 Traffic Based Keep ALive: Not Supported 00:16:05.756 Namespace Granularity: Not Supported 00:16:05.756 SQ Associations: Not Supported 00:16:05.756 UUID List: Not Supported 00:16:05.756 Multi-Domain Subsystem: Not Supported 00:16:05.756 Fixed Capacity Management: Not Supported 00:16:05.756 Variable Capacity Management: Not Supported 00:16:05.756 Delete Endurance Group: Not Supported 00:16:05.756 Delete NVM Set: Not Supported 00:16:05.756 Extended LBA Formats Supported: Not Supported 00:16:05.756 Flexible Data Placement Supported: Not Supported 00:16:05.756 00:16:05.756 Controller Memory Buffer Support 00:16:05.756 ================================ 00:16:05.756 Supported: No 00:16:05.756 00:16:05.756 Persistent Memory Region Support 00:16:05.756 ================================ 00:16:05.756 Supported: No 00:16:05.756 00:16:05.756 Admin Command Set Attributes 00:16:05.756 ============================ 00:16:05.756 Security Send/Receive: Not Supported 00:16:05.756 Format NVM: Not Supported 00:16:05.756 Firmware Activate/Download: Not Supported 00:16:05.756 Namespace Management: Not Supported 00:16:05.756 Device Self-Test: Not Supported 00:16:05.756 Directives: Not Supported 00:16:05.756 NVMe-MI: Not Supported 00:16:05.756 Virtualization Management: Not Supported 00:16:05.756 Doorbell Buffer Config: Not Supported 00:16:05.756 Get LBA Status Capability: Not Supported 00:16:05.756 Command & Feature Lockdown Capability: Not Supported 00:16:05.756 Abort Command Limit: 4 00:16:05.756 Async Event Request Limit: 4 00:16:05.756 Number of Firmware Slots: N/A 00:16:05.756 Firmware Slot 1 Read-Only: N/A 00:16:05.756 Firmware Activation Without Reset: N/A 00:16:05.756 Multiple Update Detection Support: N/A 00:16:05.756 Firmware Update 
Granularity: No Information Provided 00:16:05.756 Per-Namespace SMART Log: No 00:16:05.756 Asymmetric Namespace Access Log Page: Not Supported 00:16:05.756 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:05.756 Command Effects Log Page: Supported 00:16:05.756 Get Log Page Extended Data: Supported 00:16:05.756 Telemetry Log Pages: Not Supported 00:16:05.756 Persistent Event Log Pages: Not Supported 00:16:05.756 Supported Log Pages Log Page: May Support 00:16:05.756 Commands Supported & Effects Log Page: Not Supported 00:16:05.756 Feature Identifiers & Effects Log Page:May Support 00:16:05.756 NVMe-MI Commands & Effects Log Page: May Support 00:16:05.756 Data Area 4 for Telemetry Log: Not Supported 00:16:05.756 Error Log Page Entries Supported: 128 00:16:05.756 Keep Alive: Supported 00:16:05.756 Keep Alive Granularity: 10000 ms 00:16:05.756 00:16:05.756 NVM Command Set Attributes 00:16:05.756 ========================== 00:16:05.756 Submission Queue Entry Size 00:16:05.756 Max: 64 00:16:05.756 Min: 64 00:16:05.756 Completion Queue Entry Size 00:16:05.756 Max: 16 00:16:05.756 Min: 16 00:16:05.756 Number of Namespaces: 32 00:16:05.756 Compare Command: Supported 00:16:05.756 Write Uncorrectable Command: Not Supported 00:16:05.756 Dataset Management Command: Supported 00:16:05.756 Write Zeroes Command: Supported 00:16:05.756 Set Features Save Field: Not Supported 00:16:05.756 Reservations: Not Supported 00:16:05.756 Timestamp: Not Supported 00:16:05.756 Copy: Supported 00:16:05.756 Volatile Write Cache: Present 00:16:05.756 Atomic Write Unit (Normal): 1 00:16:05.756 Atomic Write Unit (PFail): 1 00:16:05.756 Atomic Compare & Write Unit: 1 00:16:05.756 Fused Compare & Write: Supported 00:16:05.756 Scatter-Gather List 00:16:05.756 SGL Command Set: Supported (Dword aligned) 00:16:05.756 SGL Keyed: Not Supported 00:16:05.756 SGL Bit Bucket Descriptor: Not Supported 00:16:05.756 SGL Metadata Pointer: Not Supported 00:16:05.756 Oversized SGL: Not Supported 00:16:05.756 SGL 
Metadata Address: Not Supported 00:16:05.756 SGL Offset: Not Supported 00:16:05.756 Transport SGL Data Block: Not Supported 00:16:05.756 Replay Protected Memory Block: Not Supported 00:16:05.756 00:16:05.756 Firmware Slot Information 00:16:05.756 ========================= 00:16:05.756 Active slot: 1 00:16:05.756 Slot 1 Firmware Revision: 25.01 00:16:05.756 00:16:05.756 00:16:05.756 Commands Supported and Effects 00:16:05.756 ============================== 00:16:05.756 Admin Commands 00:16:05.756 -------------- 00:16:05.756 Get Log Page (02h): Supported 00:16:05.756 Identify (06h): Supported 00:16:05.756 Abort (08h): Supported 00:16:05.756 Set Features (09h): Supported 00:16:05.756 Get Features (0Ah): Supported 00:16:05.756 Asynchronous Event Request (0Ch): Supported 00:16:05.756 Keep Alive (18h): Supported 00:16:05.756 I/O Commands 00:16:05.756 ------------ 00:16:05.756 Flush (00h): Supported LBA-Change 00:16:05.756 Write (01h): Supported LBA-Change 00:16:05.756 Read (02h): Supported 00:16:05.756 Compare (05h): Supported 00:16:05.756 Write Zeroes (08h): Supported LBA-Change 00:16:05.756 Dataset Management (09h): Supported LBA-Change 00:16:05.756 Copy (19h): Supported LBA-Change 00:16:05.756 00:16:05.756 Error Log 00:16:05.756 ========= 00:16:05.756 00:16:05.756 Arbitration 00:16:05.756 =========== 00:16:05.756 Arbitration Burst: 1 00:16:05.756 00:16:05.756 Power Management 00:16:05.756 ================ 00:16:05.756 Number of Power States: 1 00:16:05.756 Current Power State: Power State #0 00:16:05.756 Power State #0: 00:16:05.756 Max Power: 0.00 W 00:16:05.756 Non-Operational State: Operational 00:16:05.756 Entry Latency: Not Reported 00:16:05.756 Exit Latency: Not Reported 00:16:05.756 Relative Read Throughput: 0 00:16:05.756 Relative Read Latency: 0 00:16:05.756 Relative Write Throughput: 0 00:16:05.756 Relative Write Latency: 0 00:16:05.756 Idle Power: Not Reported 00:16:05.756 Active Power: Not Reported 00:16:05.756 Non-Operational Permissive Mode: Not 
Supported 00:16:05.756 00:16:05.756 Health Information 00:16:05.756 ================== 00:16:05.756 Critical Warnings: 00:16:05.756 Available Spare Space: OK 00:16:05.756 Temperature: OK 00:16:05.756 Device Reliability: OK 00:16:05.756 Read Only: No 00:16:05.756 Volatile Memory Backup: OK 00:16:05.756 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:05.756 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:05.756 Available Spare: 0% 00:16:05.756 Available Sp[2024-11-20 12:30:11.272329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:05.756 [2024-11-20 12:30:11.280208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:05.756 [2024-11-20 12:30:11.280238] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:05.756 [2024-11-20 12:30:11.280247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.756 [2024-11-20 12:30:11.280253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.756 [2024-11-20 12:30:11.280258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.756 [2024-11-20 12:30:11.280264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.756 [2024-11-20 12:30:11.280305] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:05.756 [2024-11-20 12:30:11.280315] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:05.756 
[2024-11-20 12:30:11.281309] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:05.756 [2024-11-20 12:30:11.281352] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:05.756 [2024-11-20 12:30:11.281358] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:05.756 [2024-11-20 12:30:11.282317] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:05.757 [2024-11-20 12:30:11.282329] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:05.757 [2024-11-20 12:30:11.282376] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:05.757 [2024-11-20 12:30:11.283340] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:05.757 are Threshold: 0% 00:16:05.757 Life Percentage Used: 0% 00:16:05.757 Data Units Read: 0 00:16:05.757 Data Units Written: 0 00:16:05.757 Host Read Commands: 0 00:16:05.757 Host Write Commands: 0 00:16:05.757 Controller Busy Time: 0 minutes 00:16:05.757 Power Cycles: 0 00:16:05.757 Power On Hours: 0 hours 00:16:05.757 Unsafe Shutdowns: 0 00:16:05.757 Unrecoverable Media Errors: 0 00:16:05.757 Lifetime Error Log Entries: 0 00:16:05.757 Warning Temperature Time: 0 minutes 00:16:05.757 Critical Temperature Time: 0 minutes 00:16:05.757 00:16:05.757 Number of Queues 00:16:05.757 ================ 00:16:05.757 Number of I/O Submission Queues: 127 00:16:05.757 Number of I/O Completion Queues: 127 00:16:05.757 00:16:05.757 Active Namespaces 00:16:05.757 ================= 00:16:05.757 Namespace ID:1 00:16:05.757 Error Recovery Timeout: Unlimited 
00:16:05.757 Command Set Identifier: NVM (00h) 00:16:05.757 Deallocate: Supported 00:16:05.757 Deallocated/Unwritten Error: Not Supported 00:16:05.757 Deallocated Read Value: Unknown 00:16:05.757 Deallocate in Write Zeroes: Not Supported 00:16:05.757 Deallocated Guard Field: 0xFFFF 00:16:05.757 Flush: Supported 00:16:05.757 Reservation: Supported 00:16:05.757 Namespace Sharing Capabilities: Multiple Controllers 00:16:05.757 Size (in LBAs): 131072 (0GiB) 00:16:05.757 Capacity (in LBAs): 131072 (0GiB) 00:16:05.757 Utilization (in LBAs): 131072 (0GiB) 00:16:05.757 NGUID: C79DBC48F59A4DF695CC8884F54D7E96 00:16:05.757 UUID: c79dbc48-f59a-4df6-95cc-8884f54d7e96 00:16:05.757 Thin Provisioning: Not Supported 00:16:05.757 Per-NS Atomic Units: Yes 00:16:05.757 Atomic Boundary Size (Normal): 0 00:16:05.757 Atomic Boundary Size (PFail): 0 00:16:05.757 Atomic Boundary Offset: 0 00:16:05.757 Maximum Single Source Range Length: 65535 00:16:05.757 Maximum Copy Length: 65535 00:16:05.757 Maximum Source Range Count: 1 00:16:05.757 NGUID/EUI64 Never Reused: No 00:16:05.757 Namespace Write Protected: No 00:16:05.757 Number of LBA Formats: 1 00:16:05.757 Current LBA Format: LBA Format #00 00:16:05.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.757 00:16:05.757 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:05.757 [2024-11-20 12:30:11.510549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:11.030 Initializing NVMe Controllers 00:16:11.030 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:11.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:11.030 Initialization complete. Launching workers. 00:16:11.030 ======================================================== 00:16:11.031 Latency(us) 00:16:11.031 Device Information : IOPS MiB/s Average min max 00:16:11.031 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39999.16 156.25 3202.17 944.34 9465.56 00:16:11.031 ======================================================== 00:16:11.031 Total : 39999.16 156.25 3202.17 944.34 9465.56 00:16:11.031 00:16:11.031 [2024-11-20 12:30:16.621461] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:11.031 12:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:11.289 [2024-11-20 12:30:16.858144] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.558 Initializing NVMe Controllers 00:16:16.558 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:16.558 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:16.558 Initialization complete. Launching workers. 
00:16:16.558 ======================================================== 00:16:16.558 Latency(us) 00:16:16.558 Device Information : IOPS MiB/s Average min max 00:16:16.558 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.72 156.02 3204.35 927.76 9509.92 00:16:16.558 ======================================================== 00:16:16.558 Total : 39940.72 156.02 3204.35 927.76 9509.92 00:16:16.558 00:16:16.558 [2024-11-20 12:30:21.880345] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:16.558 12:30:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:16.558 [2024-11-20 12:30:22.087567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.833 [2024-11-20 12:30:27.225298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.833 Initializing NVMe Controllers 00:16:21.833 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:21.833 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:21.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:21.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:21.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:21.833 Initialization complete. Launching workers. 
00:16:21.833 Starting thread on core 2 00:16:21.833 Starting thread on core 3 00:16:21.833 Starting thread on core 1 00:16:21.833 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:21.833 [2024-11-20 12:30:27.521610] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.125 [2024-11-20 12:30:30.575768] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.125 Initializing NVMe Controllers 00:16:25.125 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.125 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.125 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:25.125 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:25.125 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:25.125 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:25.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:25.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:25.125 Initialization complete. Launching workers. 
00:16:25.125 Starting thread on core 1 with urgent priority queue 00:16:25.125 Starting thread on core 2 with urgent priority queue 00:16:25.125 Starting thread on core 3 with urgent priority queue 00:16:25.125 Starting thread on core 0 with urgent priority queue 00:16:25.125 SPDK bdev Controller (SPDK2 ) core 0: 10829.67 IO/s 9.23 secs/100000 ios 00:16:25.125 SPDK bdev Controller (SPDK2 ) core 1: 9235.00 IO/s 10.83 secs/100000 ios 00:16:25.125 SPDK bdev Controller (SPDK2 ) core 2: 7636.67 IO/s 13.09 secs/100000 ios 00:16:25.125 SPDK bdev Controller (SPDK2 ) core 3: 9734.00 IO/s 10.27 secs/100000 ios 00:16:25.125 ======================================================== 00:16:25.125 00:16:25.125 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:25.125 [2024-11-20 12:30:30.863659] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.125 Initializing NVMe Controllers 00:16:25.125 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.125 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.125 Namespace ID: 1 size: 0GB 00:16:25.125 Initialization complete. 00:16:25.125 INFO: using host memory buffer for IO 00:16:25.125 Hello world! 
00:16:25.125 [2024-11-20 12:30:30.873734] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.384 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:25.642 [2024-11-20 12:30:31.150579] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.580 Initializing NVMe Controllers 00:16:26.580 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.580 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.580 Initialization complete. Launching workers. 00:16:26.580 submit (in ns) avg, min, max = 8674.2, 3184.8, 3999254.3 00:16:26.580 complete (in ns) avg, min, max = 20400.4, 1717.1, 6988611.4 00:16:26.580 00:16:26.580 Submit histogram 00:16:26.580 ================ 00:16:26.580 Range in us Cumulative Count 00:16:26.580 3.185 - 3.200: 0.0179% ( 3) 00:16:26.580 3.200 - 3.215: 0.0778% ( 10) 00:16:26.580 3.215 - 3.230: 0.3110% ( 39) 00:16:26.580 3.230 - 3.246: 0.6520% ( 57) 00:16:26.580 3.246 - 3.261: 1.3399% ( 115) 00:16:26.580 3.261 - 3.276: 3.5889% ( 376) 00:16:26.580 3.276 - 3.291: 9.0023% ( 905) 00:16:26.580 3.291 - 3.307: 15.3727% ( 1065) 00:16:26.580 3.307 - 3.322: 22.0062% ( 1109) 00:16:26.580 3.322 - 3.337: 28.6278% ( 1107) 00:16:26.580 3.337 - 3.352: 34.4599% ( 975) 00:16:26.580 3.352 - 3.368: 39.9569% ( 919) 00:16:26.580 3.368 - 3.383: 46.5008% ( 1094) 00:16:26.580 3.383 - 3.398: 52.1115% ( 938) 00:16:26.580 3.398 - 3.413: 56.7173% ( 770) 00:16:26.580 3.413 - 3.429: 63.1116% ( 1069) 00:16:26.580 3.429 - 3.444: 70.8219% ( 1289) 00:16:26.580 3.444 - 3.459: 74.9372% ( 688) 00:16:26.580 3.459 - 3.474: 79.7942% ( 812) 00:16:26.580 3.474 - 3.490: 83.5327% ( 625) 00:16:26.580 3.490 - 3.505: 85.8296% ( 384) 
00:16:26.580 3.505 - 3.520: 87.2533% ( 238) 00:16:26.580 3.520 - 3.535: 87.7796% ( 88) 00:16:26.580 3.535 - 3.550: 88.1505% ( 62) 00:16:26.580 3.550 - 3.566: 88.4795% ( 55) 00:16:26.580 3.566 - 3.581: 89.1255% ( 108) 00:16:26.580 3.581 - 3.596: 89.9091% ( 131) 00:16:26.580 3.596 - 3.611: 90.9020% ( 166) 00:16:26.580 3.611 - 3.627: 91.8651% ( 161) 00:16:26.580 3.627 - 3.642: 92.8401% ( 163) 00:16:26.580 3.642 - 3.657: 93.7732% ( 156) 00:16:26.580 3.657 - 3.672: 94.6285% ( 143) 00:16:26.580 3.672 - 3.688: 95.6155% ( 165) 00:16:26.580 3.688 - 3.703: 96.5127% ( 150) 00:16:26.580 3.703 - 3.718: 97.3203% ( 135) 00:16:26.580 3.718 - 3.733: 97.9842% ( 111) 00:16:26.580 3.733 - 3.749: 98.4867% ( 84) 00:16:26.580 3.749 - 3.764: 98.7857% ( 50) 00:16:26.580 3.764 - 3.779: 99.0968% ( 52) 00:16:26.580 3.779 - 3.794: 99.2822% ( 31) 00:16:26.580 3.794 - 3.810: 99.4377% ( 26) 00:16:26.580 3.810 - 3.825: 99.5215% ( 14) 00:16:26.580 3.825 - 3.840: 99.5693% ( 8) 00:16:26.580 3.840 - 3.855: 99.5992% ( 5) 00:16:26.580 3.855 - 3.870: 99.6232% ( 4) 00:16:26.580 3.870 - 3.886: 99.6291% ( 1) 00:16:26.580 3.886 - 3.901: 99.6351% ( 1) 00:16:26.580 3.931 - 3.962: 99.6471% ( 2) 00:16:26.580 5.029 - 5.059: 99.6591% ( 2) 00:16:26.580 5.059 - 5.090: 99.6650% ( 1) 00:16:26.580 5.120 - 5.150: 99.6710% ( 1) 00:16:26.580 5.211 - 5.242: 99.6770% ( 1) 00:16:26.580 5.303 - 5.333: 99.6830% ( 1) 00:16:26.580 5.364 - 5.394: 99.6890% ( 1) 00:16:26.580 5.394 - 5.425: 99.6949% ( 1) 00:16:26.580 5.425 - 5.455: 99.7009% ( 1) 00:16:26.580 5.547 - 5.577: 99.7189% ( 3) 00:16:26.580 5.669 - 5.699: 99.7308% ( 2) 00:16:26.580 5.699 - 5.730: 99.7428% ( 2) 00:16:26.580 5.760 - 5.790: 99.7548% ( 2) 00:16:26.580 5.790 - 5.821: 99.7607% ( 1) 00:16:26.580 5.882 - 5.912: 99.7727% ( 2) 00:16:26.580 6.004 - 6.034: 99.7787% ( 1) 00:16:26.580 6.034 - 6.065: 99.7847% ( 1) 00:16:26.580 6.095 - 6.126: 99.7906% ( 1) 00:16:26.580 6.217 - 6.248: 99.7966% ( 1) 00:16:26.580 6.339 - 6.370: 99.8026% ( 1) 00:16:26.580 6.949 - 6.979: 
99.8146% ( 2) 00:16:26.580 7.010 - 7.040: 99.8206% ( 1) 00:16:26.580 7.162 - 7.192: 99.8265% ( 1) 00:16:26.580 7.192 - 7.223: 99.8385% ( 2) 00:16:26.580 7.253 - 7.284: 99.8445% ( 1) 00:16:26.580 7.406 - 7.436: 99.8505% ( 1) 00:16:26.580 7.558 - 7.589: 99.8564% ( 1) 00:16:26.580 7.589 - 7.619: 99.8624% ( 1) 00:16:26.580 10.728 - 10.789: 99.8684% ( 1) 00:16:26.580 3994.575 - 4025.783: 100.0000% ( 22) 00:16:26.580 00:16:26.580 Complete histogram 00:16:26.580 ================== 00:16:26.580 Range in us Cumulative Count 00:16:26.580 1.714 - [2024-11-20 12:30:32.244174] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.580 1.722: 0.0299% ( 5) 00:16:26.580 1.722 - 1.730: 0.2153% ( 31) 00:16:26.580 1.730 - 1.737: 0.3948% ( 30) 00:16:26.580 1.737 - 1.745: 0.4127% ( 3) 00:16:26.580 1.745 - 1.752: 0.4307% ( 3) 00:16:26.580 1.752 - 1.760: 0.4725% ( 7) 00:16:26.580 1.760 - 1.768: 1.4834% ( 169) 00:16:26.580 1.768 - 1.775: 12.1187% ( 1778) 00:16:26.580 1.775 - 1.783: 39.2152% ( 4530) 00:16:26.580 1.783 - 1.790: 59.1757% ( 3337) 00:16:26.580 1.790 - 1.798: 65.0855% ( 988) 00:16:26.580 1.798 - 1.806: 67.8610% ( 464) 00:16:26.580 1.806 - 1.813: 70.0263% ( 362) 00:16:26.580 1.813 - 1.821: 70.8817% ( 143) 00:16:26.580 1.821 - 1.829: 72.3412% ( 244) 00:16:26.580 1.829 - 1.836: 78.7116% ( 1065) 00:16:26.580 1.836 - 1.844: 87.9471% ( 1544) 00:16:26.580 1.844 - 1.851: 93.4203% ( 915) 00:16:26.580 1.851 - 1.859: 95.9086% ( 416) 00:16:26.580 1.859 - 1.867: 97.5176% ( 269) 00:16:26.580 1.867 - 1.874: 98.4807% ( 161) 00:16:26.580 1.874 - 1.882: 98.8515% ( 62) 00:16:26.580 1.882 - 1.890: 99.0250% ( 29) 00:16:26.580 1.890 - 1.897: 99.0968% ( 12) 00:16:26.580 1.897 - 1.905: 99.1566% ( 10) 00:16:26.580 1.905 - 1.912: 99.1865% ( 5) 00:16:26.580 1.912 - 1.920: 99.2523% ( 11) 00:16:26.580 1.920 - 1.928: 99.3002% ( 8) 00:16:26.580 1.928 - 1.935: 99.3181% ( 3) 00:16:26.580 1.935 - 1.943: 99.3301% ( 2) 00:16:26.580 1.943 - 1.950: 99.3360% ( 
1) 00:16:26.580 1.950 - 1.966: 99.3420% ( 1) 00:16:26.580 1.981 - 1.996: 99.3480% ( 1) 00:16:26.580 1.996 - 2.011: 99.3600% ( 2) 00:16:26.580 2.011 - 2.027: 99.3719% ( 2) 00:16:26.580 2.057 - 2.072: 99.3779% ( 1) 00:16:26.580 2.088 - 2.103: 99.3839% ( 1) 00:16:26.580 3.337 - 3.352: 99.3899% ( 1) 00:16:26.580 3.383 - 3.398: 99.3959% ( 1) 00:16:26.580 3.459 - 3.474: 99.4018% ( 1) 00:16:26.580 3.688 - 3.703: 99.4138% ( 2) 00:16:26.580 3.718 - 3.733: 99.4198% ( 1) 00:16:26.580 3.779 - 3.794: 99.4258% ( 1) 00:16:26.580 3.855 - 3.870: 99.4318% ( 1) 00:16:26.580 3.870 - 3.886: 99.4377% ( 1) 00:16:26.580 3.886 - 3.901: 99.4437% ( 1) 00:16:26.580 4.175 - 4.206: 99.4497% ( 1) 00:16:26.580 4.267 - 4.297: 99.4617% ( 2) 00:16:26.580 4.328 - 4.358: 99.4676% ( 1) 00:16:26.580 4.571 - 4.602: 99.4736% ( 1) 00:16:26.580 5.120 - 5.150: 99.4796% ( 1) 00:16:26.580 5.455 - 5.486: 99.4856% ( 1) 00:16:26.580 5.577 - 5.608: 99.4916% ( 1) 00:16:26.580 5.730 - 5.760: 99.4975% ( 1) 00:16:26.580 5.821 - 5.851: 99.5035% ( 1) 00:16:26.580 5.943 - 5.973: 99.5095% ( 1) 00:16:26.580 6.004 - 6.034: 99.5155% ( 1) 00:16:26.580 6.461 - 6.491: 99.5215% ( 1) 00:16:26.580 10.118 - 10.179: 99.5275% ( 1) 00:16:26.580 17.189 - 17.310: 99.5334% ( 1) 00:16:26.580 147.261 - 148.236: 99.5394% ( 1) 00:16:26.580 2168.930 - 2184.533: 99.5454% ( 1) 00:16:26.580 3994.575 - 4025.783: 99.9880% ( 74) 00:16:26.580 5960.655 - 5991.863: 99.9940% ( 1) 00:16:26.580 6959.299 - 6990.507: 100.0000% ( 1) 00:16:26.580 00:16:26.580 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:26.580 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:26.580 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:26.580 12:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:26.580 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.838 [ 00:16:26.839 { 00:16:26.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.839 "subtype": "Discovery", 00:16:26.839 "listen_addresses": [], 00:16:26.839 "allow_any_host": true, 00:16:26.839 "hosts": [] 00:16:26.839 }, 00:16:26.839 { 00:16:26.839 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.839 "subtype": "NVMe", 00:16:26.839 "listen_addresses": [ 00:16:26.839 { 00:16:26.839 "trtype": "VFIOUSER", 00:16:26.839 "adrfam": "IPv4", 00:16:26.839 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.839 "trsvcid": "0" 00:16:26.839 } 00:16:26.839 ], 00:16:26.839 "allow_any_host": true, 00:16:26.839 "hosts": [], 00:16:26.839 "serial_number": "SPDK1", 00:16:26.839 "model_number": "SPDK bdev Controller", 00:16:26.839 "max_namespaces": 32, 00:16:26.839 "min_cntlid": 1, 00:16:26.839 "max_cntlid": 65519, 00:16:26.839 "namespaces": [ 00:16:26.839 { 00:16:26.839 "nsid": 1, 00:16:26.839 "bdev_name": "Malloc1", 00:16:26.839 "name": "Malloc1", 00:16:26.839 "nguid": "A2FF113781904F369A8F568B09DF51B1", 00:16:26.839 "uuid": "a2ff1137-8190-4f36-9a8f-568b09df51b1" 00:16:26.839 }, 00:16:26.839 { 00:16:26.839 "nsid": 2, 00:16:26.839 "bdev_name": "Malloc3", 00:16:26.839 "name": "Malloc3", 00:16:26.839 "nguid": "6D0048A7511242F0800C35992FEBB62E", 00:16:26.839 "uuid": "6d0048a7-5112-42f0-800c-35992febb62e" 00:16:26.839 } 00:16:26.839 ] 00:16:26.839 }, 00:16:26.839 { 00:16:26.839 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.839 "subtype": "NVMe", 00:16:26.839 "listen_addresses": [ 00:16:26.839 { 00:16:26.839 "trtype": "VFIOUSER", 00:16:26.839 "adrfam": "IPv4", 00:16:26.839 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.839 "trsvcid": "0" 00:16:26.839 } 00:16:26.839 
], 00:16:26.839 "allow_any_host": true, 00:16:26.839 "hosts": [], 00:16:26.839 "serial_number": "SPDK2", 00:16:26.839 "model_number": "SPDK bdev Controller", 00:16:26.839 "max_namespaces": 32, 00:16:26.839 "min_cntlid": 1, 00:16:26.839 "max_cntlid": 65519, 00:16:26.839 "namespaces": [ 00:16:26.839 { 00:16:26.839 "nsid": 1, 00:16:26.839 "bdev_name": "Malloc2", 00:16:26.839 "name": "Malloc2", 00:16:26.839 "nguid": "C79DBC48F59A4DF695CC8884F54D7E96", 00:16:26.839 "uuid": "c79dbc48-f59a-4df6-95cc-8884f54d7e96" 00:16:26.839 } 00:16:26.839 ] 00:16:26.839 } 00:16:26.839 ] 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=152533 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:26.839 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:27.098 [2024-11-20 12:30:32.657613] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.098 Malloc4 00:16:27.098 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:27.356 [2024-11-20 12:30:32.883218] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.356 12:30:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:27.356 Asynchronous Event Request test 00:16:27.356 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.356 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:27.356 Registering asynchronous event callbacks... 00:16:27.356 Starting namespace attribute notice tests for all controllers... 00:16:27.356 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:27.356 aer_cb - Changed Namespace 00:16:27.356 Cleaning up... 
00:16:27.356 [ 00:16:27.356 { 00:16:27.356 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:27.356 "subtype": "Discovery", 00:16:27.356 "listen_addresses": [], 00:16:27.356 "allow_any_host": true, 00:16:27.356 "hosts": [] 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:27.356 "subtype": "NVMe", 00:16:27.356 "listen_addresses": [ 00:16:27.356 { 00:16:27.356 "trtype": "VFIOUSER", 00:16:27.356 "adrfam": "IPv4", 00:16:27.356 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:27.357 "trsvcid": "0" 00:16:27.357 } 00:16:27.357 ], 00:16:27.357 "allow_any_host": true, 00:16:27.357 "hosts": [], 00:16:27.357 "serial_number": "SPDK1", 00:16:27.357 "model_number": "SPDK bdev Controller", 00:16:27.357 "max_namespaces": 32, 00:16:27.357 "min_cntlid": 1, 00:16:27.357 "max_cntlid": 65519, 00:16:27.357 "namespaces": [ 00:16:27.357 { 00:16:27.357 "nsid": 1, 00:16:27.357 "bdev_name": "Malloc1", 00:16:27.357 "name": "Malloc1", 00:16:27.357 "nguid": "A2FF113781904F369A8F568B09DF51B1", 00:16:27.357 "uuid": "a2ff1137-8190-4f36-9a8f-568b09df51b1" 00:16:27.357 }, 00:16:27.357 { 00:16:27.357 "nsid": 2, 00:16:27.357 "bdev_name": "Malloc3", 00:16:27.357 "name": "Malloc3", 00:16:27.357 "nguid": "6D0048A7511242F0800C35992FEBB62E", 00:16:27.357 "uuid": "6d0048a7-5112-42f0-800c-35992febb62e" 00:16:27.357 } 00:16:27.357 ] 00:16:27.357 }, 00:16:27.357 { 00:16:27.357 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:27.357 "subtype": "NVMe", 00:16:27.357 "listen_addresses": [ 00:16:27.357 { 00:16:27.357 "trtype": "VFIOUSER", 00:16:27.357 "adrfam": "IPv4", 00:16:27.357 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:27.357 "trsvcid": "0" 00:16:27.357 } 00:16:27.357 ], 00:16:27.357 "allow_any_host": true, 00:16:27.357 "hosts": [], 00:16:27.357 "serial_number": "SPDK2", 00:16:27.357 "model_number": "SPDK bdev Controller", 00:16:27.357 "max_namespaces": 32, 00:16:27.357 "min_cntlid": 1, 00:16:27.357 "max_cntlid": 65519, 00:16:27.357 "namespaces": [ 
00:16:27.357 { 00:16:27.357 "nsid": 1, 00:16:27.357 "bdev_name": "Malloc2", 00:16:27.357 "name": "Malloc2", 00:16:27.357 "nguid": "C79DBC48F59A4DF695CC8884F54D7E96", 00:16:27.357 "uuid": "c79dbc48-f59a-4df6-95cc-8884f54d7e96" 00:16:27.357 }, 00:16:27.357 { 00:16:27.357 "nsid": 2, 00:16:27.357 "bdev_name": "Malloc4", 00:16:27.357 "name": "Malloc4", 00:16:27.357 "nguid": "23D8B26CBB104E3C90B2918679A87AEF", 00:16:27.357 "uuid": "23d8b26c-bb10-4e3c-90b2-918679a87aef" 00:16:27.357 } 00:16:27.357 ] 00:16:27.357 } 00:16:27.357 ] 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 152533 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 144352 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 144352 ']' 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 144352 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.357 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144352 00:16:27.616 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.616 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.616 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144352' 00:16:27.616 killing process with pid 144352 00:16:27.616 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 144352 00:16:27.616 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 144352 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=152554 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 152554' 00:16:27.875 Process pid: 152554 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 152554 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 152554 ']' 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.875 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.875 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 [2024-11-20 12:30:33.443091] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:27.875 [2024-11-20 12:30:33.443992] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:27.875 [2024-11-20 12:30:33.444032] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.875 [2024-11-20 12:30:33.521511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.875 [2024-11-20 12:30:33.563724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.875 [2024-11-20 12:30:33.563760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.875 [2024-11-20 12:30:33.563769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.875 [2024-11-20 12:30:33.563775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.875 [2024-11-20 12:30:33.563781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.875 [2024-11-20 12:30:33.565290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.875 [2024-11-20 12:30:33.565399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.875 [2024-11-20 12:30:33.565508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.875 [2024-11-20 12:30:33.565509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.875 [2024-11-20 12:30:33.633123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:27.875 [2024-11-20 12:30:33.633488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:27.875 [2024-11-20 12:30:33.633987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:27.875 [2024-11-20 12:30:33.634354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:27.875 [2024-11-20 12:30:33.634400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:28.134 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.134 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:28.134 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:29.071 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:29.330 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:29.330 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:29.330 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.330 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:29.330 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:29.588 Malloc1 00:16:29.588 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:29.588 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:29.847 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:30.106 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:30.106 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:30.106 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:30.365 Malloc2 00:16:30.365 12:30:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:30.624 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:30.624 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 152554 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 152554 ']' 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 152554 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.883 12:30:36 
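Condensing the interleaved trace above, the per-device setup that `setup_nvmf_vfio_user` performs for device 1 boils down to the sequence below. This is a fragment rather than a runnable script: it assumes a running `nvmf_tgt` with an RPC socket, `rpc.py` is shortened from the full workspace path, and the flags are copied from the log lines.

```shell
# Create the vfio-user transport, then for each device: a socket directory,
# a malloc bdev, a subsystem, a namespace, and a vfio-user listener.
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
```

The same steps repeat for `cnode2`/`Malloc2` under `vfio-user2/2`, which is the controller the hello_world, overhead, and AER tests then attach to.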
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152554 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152554' 00:16:30.883 killing process with pid 152554 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 152554 00:16:30.883 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 152554 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:31.143 00:16:31.143 real 0m50.827s 00:16:31.143 user 3m16.430s 00:16:31.143 sys 0m3.283s 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:31.143 ************************************ 00:16:31.143 END TEST nvmf_vfio_user 00:16:31.143 ************************************ 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.143 ************************************ 00:16:31.143 START TEST nvmf_vfio_user_nvme_compliance 00:16:31.143 ************************************ 00:16:31.143 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:31.403 * Looking for test storage... 00:16:31.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:31.403 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:31.403 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:31.403 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.403 12:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.403 12:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.403 --rc genhtml_branch_coverage=1 00:16:31.403 --rc genhtml_function_coverage=1 00:16:31.403 --rc genhtml_legend=1 00:16:31.403 --rc geninfo_all_blocks=1 00:16:31.403 --rc geninfo_unexecuted_blocks=1 00:16:31.403 00:16:31.403 ' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.403 --rc genhtml_branch_coverage=1 00:16:31.403 --rc genhtml_function_coverage=1 00:16:31.403 --rc genhtml_legend=1 00:16:31.403 --rc geninfo_all_blocks=1 00:16:31.403 --rc geninfo_unexecuted_blocks=1 00:16:31.403 00:16:31.403 ' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.403 --rc genhtml_branch_coverage=1 00:16:31.403 --rc genhtml_function_coverage=1 00:16:31.403 --rc 
genhtml_legend=1 00:16:31.403 --rc geninfo_all_blocks=1 00:16:31.403 --rc geninfo_unexecuted_blocks=1 00:16:31.403 00:16:31.403 ' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.403 --rc genhtml_branch_coverage=1 00:16:31.403 --rc genhtml_function_coverage=1 00:16:31.403 --rc genhtml_legend=1 00:16:31.403 --rc geninfo_all_blocks=1 00:16:31.403 --rc geninfo_unexecuted_blocks=1 00:16:31.403 00:16:31.403 ' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.403 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.404 12:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.404 12:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=153318 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 153318' 00:16:31.404 Process pid: 153318 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 153318 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 153318 ']' 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.404 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:31.404 [2024-11-20 12:30:37.114084] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:16:31.404 [2024-11-20 12:30:37.114131] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.664 [2024-11-20 12:30:37.186706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.664 [2024-11-20 12:30:37.228159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.664 [2024-11-20 12:30:37.228198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.664 [2024-11-20 12:30:37.228210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.664 [2024-11-20 12:30:37.228217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.664 [2024-11-20 12:30:37.228222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.664 [2024-11-20 12:30:37.229491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.664 [2024-11-20 12:30:37.229509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.664 [2024-11-20 12:30:37.229512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.664 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.664 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:31.664 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.599 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.599 12:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.858 malloc0 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:32.858 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:32.858 00:16:32.858 00:16:32.858 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.858 http://cunit.sourceforge.net/ 00:16:32.858 00:16:32.858 00:16:32.858 Suite: nvme_compliance 00:16:32.858 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 12:30:38.576679] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.858 [2024-11-20 12:30:38.578005] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:32.858 [2024-11-20 12:30:38.578021] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:32.858 [2024-11-20 12:30:38.578026] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:32.858 [2024-11-20 12:30:38.580708] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.858 passed 00:16:33.141 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 12:30:38.661244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.141 [2024-11-20 12:30:38.664277] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.141 passed 00:16:33.141 Test: admin_identify_ns ...[2024-11-20 12:30:38.739711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.141 [2024-11-20 12:30:38.803211] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:33.141 [2024-11-20 12:30:38.811222] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:33.141 [2024-11-20 12:30:38.832315] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:33.141 passed 00:16:33.400 Test: admin_get_features_mandatory_features ...[2024-11-20 12:30:38.906127] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.400 [2024-11-20 12:30:38.909148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.400 passed 00:16:33.400 Test: admin_get_features_optional_features ...[2024-11-20 12:30:38.985662] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.400 [2024-11-20 12:30:38.988683] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.400 passed 00:16:33.400 Test: admin_set_features_number_of_queues ...[2024-11-20 12:30:39.064416] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.670 [2024-11-20 12:30:39.170300] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.670 passed 00:16:33.670 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 12:30:39.247892] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.670 [2024-11-20 12:30:39.250910] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.670 passed 00:16:33.670 Test: admin_get_log_page_with_lpo ...[2024-11-20 12:30:39.327640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.670 [2024-11-20 12:30:39.396214] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:33.670 [2024-11-20 12:30:39.409274] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.928 passed 00:16:33.928 Test: fabric_property_get ...[2024-11-20 12:30:39.483054] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.928 [2024-11-20 12:30:39.484289] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:33.928 [2024-11-20 12:30:39.488096] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.928 passed 00:16:33.928 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 12:30:39.563597] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.928 [2024-11-20 12:30:39.564824] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:33.928 [2024-11-20 12:30:39.566617] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.928 passed 00:16:33.928 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 12:30:39.644325] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.187 [2024-11-20 12:30:39.730209] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.187 [2024-11-20 12:30:39.746212] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.187 [2024-11-20 12:30:39.750473] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.187 passed 00:16:34.187 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 12:30:39.827244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.187 [2024-11-20 12:30:39.828472] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:34.187 [2024-11-20 12:30:39.830263] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.187 passed 00:16:34.187 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 12:30:39.904884] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.445 [2024-11-20 12:30:39.980217] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:34.445 [2024-11-20 
12:30:40.004209] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.445 [2024-11-20 12:30:40.009333] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.445 passed 00:16:34.445 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 12:30:40.097225] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.445 [2024-11-20 12:30:40.098474] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:34.445 [2024-11-20 12:30:40.098499] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:34.445 [2024-11-20 12:30:40.100250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.445 passed 00:16:34.445 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 12:30:40.175056] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.704 [2024-11-20 12:30:40.266208] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:34.704 [2024-11-20 12:30:40.274211] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:34.704 [2024-11-20 12:30:40.282208] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:34.704 [2024-11-20 12:30:40.290213] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:34.704 [2024-11-20 12:30:40.319372] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.704 passed 00:16:34.704 Test: admin_create_io_sq_verify_pc ...[2024-11-20 12:30:40.395013] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.704 [2024-11-20 12:30:40.411219] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:34.704 [2024-11-20 12:30:40.429210] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.704 passed 00:16:34.963 Test: admin_create_io_qp_max_qps ...[2024-11-20 12:30:40.507727] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.899 [2024-11-20 12:30:41.606212] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:36.467 [2024-11-20 12:30:42.005313] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.467 passed 00:16:36.467 Test: admin_create_io_sq_shared_cq ...[2024-11-20 12:30:42.081198] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.467 [2024-11-20 12:30:42.214213] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.726 [2024-11-20 12:30:42.251289] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.726 passed 00:16:36.726 00:16:36.726 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.726 suites 1 1 n/a 0 0 00:16:36.726 tests 18 18 18 0 0 00:16:36.726 asserts 360 360 360 0 n/a 00:16:36.726 00:16:36.726 Elapsed time = 1.507 seconds 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 153318 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 153318 ']' 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 153318 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153318 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153318' 00:16:36.726 killing process with pid 153318 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 153318 00:16:36.726 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 153318 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:36.985 00:16:36.985 real 0m5.667s 00:16:36.985 user 0m15.871s 00:16:36.985 sys 0m0.522s 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:36.985 ************************************ 00:16:36.985 END TEST nvmf_vfio_user_nvme_compliance 00:16:36.985 ************************************ 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.985 ************************************ 00:16:36.985 START TEST nvmf_vfio_user_fuzz 00:16:36.985 ************************************ 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.985 * Looking for test storage... 00:16:36.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:36.985 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.245 12:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.245 --rc genhtml_branch_coverage=1 00:16:37.245 --rc genhtml_function_coverage=1 00:16:37.245 --rc genhtml_legend=1 00:16:37.245 --rc geninfo_all_blocks=1 00:16:37.245 --rc geninfo_unexecuted_blocks=1 00:16:37.245 00:16:37.245 ' 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.245 --rc genhtml_branch_coverage=1 00:16:37.245 --rc genhtml_function_coverage=1 00:16:37.245 --rc genhtml_legend=1 00:16:37.245 --rc geninfo_all_blocks=1 00:16:37.245 --rc geninfo_unexecuted_blocks=1 00:16:37.245 00:16:37.245 ' 00:16:37.245 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.246 --rc genhtml_branch_coverage=1 00:16:37.246 --rc genhtml_function_coverage=1 00:16:37.246 --rc genhtml_legend=1 00:16:37.246 --rc geninfo_all_blocks=1 00:16:37.246 --rc geninfo_unexecuted_blocks=1 00:16:37.246 00:16:37.246 ' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.246 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:37.246 --rc genhtml_branch_coverage=1 00:16:37.246 --rc genhtml_function_coverage=1 00:16:37.246 --rc genhtml_legend=1 00:16:37.246 --rc geninfo_all_blocks=1 00:16:37.246 --rc geninfo_unexecuted_blocks=1 00:16:37.246 00:16:37.246 ' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.246 12:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=154302 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 154302' 00:16:37.246 Process pid: 154302 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 154302 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 154302 ']' 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.246 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.246 12:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.247 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.247 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:37.505 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.505 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:37.505 12:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.461 malloc0 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.461 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:38.462 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:10.642 Fuzzing completed. Shutting down the fuzz application 00:17:10.642 00:17:10.642 Dumping successful admin opcodes: 00:17:10.642 8, 9, 10, 24, 00:17:10.642 Dumping successful io opcodes: 00:17:10.642 0, 00:17:10.642 NS: 0x20000081ef00 I/O qp, Total commands completed: 1035928, total successful commands: 4087, random_seed: 2600212544 00:17:10.642 NS: 0x20000081ef00 admin qp, Total commands completed: 251464, total successful commands: 2033, random_seed: 1421494272 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 154302 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 154302 ']' 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 154302 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154302 00:17:10.642 12:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154302' 00:17:10.642 killing process with pid 154302 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 154302 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 154302 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:10.642 00:17:10.642 real 0m32.207s 00:17:10.642 user 0m29.606s 00:17:10.642 sys 0m31.625s 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.642 ************************************ 00:17:10.642 END TEST nvmf_vfio_user_fuzz 00:17:10.642 ************************************ 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.642 ************************************ 00:17:10.642 START TEST nvmf_auth_target 00:17:10.642 ************************************ 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.642 * Looking for test storage... 00:17:10.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.642 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.642 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.642 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.643 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.643 --rc genhtml_branch_coverage=1 00:17:10.643 --rc genhtml_function_coverage=1 00:17:10.643 --rc genhtml_legend=1 00:17:10.643 --rc geninfo_all_blocks=1 00:17:10.643 --rc geninfo_unexecuted_blocks=1 00:17:10.643 00:17:10.643 ' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.643 --rc genhtml_branch_coverage=1 00:17:10.643 --rc genhtml_function_coverage=1 00:17:10.643 --rc genhtml_legend=1 00:17:10.643 --rc geninfo_all_blocks=1 00:17:10.643 --rc geninfo_unexecuted_blocks=1 00:17:10.643 00:17:10.643 ' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.643 --rc genhtml_branch_coverage=1 00:17:10.643 --rc genhtml_function_coverage=1 00:17:10.643 --rc genhtml_legend=1 00:17:10.643 --rc geninfo_all_blocks=1 00:17:10.643 --rc geninfo_unexecuted_blocks=1 00:17:10.643 00:17:10.643 ' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.643 --rc genhtml_branch_coverage=1 00:17:10.643 --rc genhtml_function_coverage=1 00:17:10.643 --rc genhtml_legend=1 00:17:10.643 
--rc geninfo_all_blocks=1 00:17:10.643 --rc geninfo_unexecuted_blocks=1 00:17:10.643 00:17:10.643 ' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.643 
12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.643 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:10.644 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.644 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.644 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.922 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.922 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:15.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:15.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.922 
12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:15.922 Found net devices under 0000:86:00.0: cvl_0_0 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.922 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.923 
12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:15.923 Found net devices under 0000:86:00.1: cvl_0_1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.923 12:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.923 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:17:15.923 00:17:15.923 --- 10.0.0.2 ping statistics --- 00:17:15.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.923 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:17:15.923 00:17:15.923 --- 10.0.0.1 ping statistics --- 00:17:15.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.923 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=162612 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 162612 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 162612 ']' 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=162768 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9dfdb6b84485734ce5764c3e7610bb88bc53f4bb03f98bd3 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ria 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9dfdb6b84485734ce5764c3e7610bb88bc53f4bb03f98bd3 0 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9dfdb6b84485734ce5764c3e7610bb88bc53f4bb03f98bd3 0 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9dfdb6b84485734ce5764c3e7610bb88bc53f4bb03f98bd3 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ria 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ria 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ria 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7dbe8a92e8266c97b58f81cec1b9ec30d0b4b47a8603632ca94df9c6f6c16807 00:17:15.923 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CWL 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7dbe8a92e8266c97b58f81cec1b9ec30d0b4b47a8603632ca94df9c6f6c16807 3 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7dbe8a92e8266c97b58f81cec1b9ec30d0b4b47a8603632ca94df9c6f6c16807 3 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7dbe8a92e8266c97b58f81cec1b9ec30d0b4b47a8603632ca94df9c6f6c16807 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CWL 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CWL 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CWL 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=37d661bab647b014def5cd8d072338c5 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3lx 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 37d661bab647b014def5cd8d072338c5 1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
37d661bab647b014def5cd8d072338c5 1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=37d661bab647b014def5cd8d072338c5 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3lx 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3lx 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.3lx 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ed20bddf0775b2641b70f5fc2a185871db3d6a382b31df76 00:17:15.924 12:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bmg 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ed20bddf0775b2641b70f5fc2a185871db3d6a382b31df76 2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ed20bddf0775b2641b70f5fc2a185871db3d6a382b31df76 2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ed20bddf0775b2641b70f5fc2a185871db3d6a382b31df76 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bmg 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bmg 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Bmg 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=73bd6b57cb118928ed3513b53bc047ba4b350b1e844a0bb5 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.a48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 73bd6b57cb118928ed3513b53bc047ba4b350b1e844a0bb5 2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 73bd6b57cb118928ed3513b53bc047ba4b350b1e844a0bb5 2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=73bd6b57cb118928ed3513b53bc047ba4b350b1e844a0bb5 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.a48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.a48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.a48 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8294921525215eeffdc0fc50eaf6146c 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mS0 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8294921525215eeffdc0fc50eaf6146c 1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8294921525215eeffdc0fc50eaf6146c 1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8294921525215eeffdc0fc50eaf6146c 00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:15.924 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mS0 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mS0 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.mS0 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:16.183 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e9e4f03fe22c34bd3eb078020e2e9d779e8049dd88b3b9b7a80daaa495f7094 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.z4w 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e9e4f03fe22c34bd3eb078020e2e9d779e8049dd88b3b9b7a80daaa495f7094 3 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9e9e4f03fe22c34bd3eb078020e2e9d779e8049dd88b3b9b7a80daaa495f7094 3 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e9e4f03fe22c34bd3eb078020e2e9d779e8049dd88b3b9b7a80daaa495f7094 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.z4w 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.z4w 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.z4w 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 162612 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 162612 ']' 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
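The gen_dhchap_key/format_dhchap_key calls traced above can be sketched as a self-contained pair of helpers (names mirror nvmf/common.sh). The framing assumed here — base64 of the ASCII hex secret with a little-endian CRC-32 appended, wrapped as `DHHC-1:<digest>:<b64>:` — is inferred from this trace and the DHHC-1 secret convention, not taken from the script source:

```shell
# Hedged sketch of the key-generation helpers seen in the trace above.
format_dhchap_key() {
    local key=$1 digest=$2
    # DHHC-1:<digest>:<base64(ASCII secret || CRC-32)>: — CRC byte order is an assumption
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PYEOF
}

gen_dhchap_key() {
    local digest_name=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest_name.XXX")
    format_dhchap_key "$key" "${digests[$digest_name]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
```

Feeding the sha256 key generated in this trace (`8294921525215eeffdc0fc50eaf6146c`, digest 1) through `format_dhchap_key` yields a secret whose base64 body begins with the same `ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0…` seen in the later `nvme connect --dhchap-ctrl-secret` call; the last few base64 characters cover the appended CRC.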
00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.184 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 162768 /var/tmp/host.sock 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 162768 ']' 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:16.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.442 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.442 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.442 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.442 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:16.442 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.442 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ria 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ria 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ria 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.CWL ]] 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CWL 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CWL 00:17:16.703 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CWL 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.3lx 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.3lx 00:17:16.962 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.3lx 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.Bmg ]] 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bmg 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bmg 00:17:17.220 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bmg 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a48 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.a48 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.a48 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.mS0 ]] 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mS0 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.479 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mS0 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mS0 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z4w 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.z4w 00:17:17.737 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.z4w 00:17:17.995 12:31:23 
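The target/auth.sh@108-113 loop traced above registers each generated key file as key$i on both the target RPC socket and the host socket, and the controller key, when one was generated, as ckey$i. A dry-run sketch of that control flow, with rpc_cmd/hostrpc stubbed as echo so it runs without a live target (key slots and file names taken from this trace; slot 3 has no controller key):

```shell
# Stubs stand in for the real RPC wrappers used in the trace.
rpc_cmd() { echo "target: $*"; }
hostrpc() { echo "host: $*"; }

keys=([0]=/tmp/spdk.key-null.ria [3]=/tmp/spdk.key-sha512.z4w)
ckeys=([0]=/tmp/spdk.key-sha512.CWL [3]="")   # key3 has no controller key

register_keys() {
    local i
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"
        # ckey is registered only when the [[ -n ... ]] guard passes
        if [[ -n ${ckeys[i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done
}
register_keys
```

Slot 0 produces four registrations (key0 and ckey0 on both sockets); slot 3 produces only two, matching the `[[ -n '' ]]` branch seen later in the trace.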
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:17.995 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:17.995 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.995 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.995 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.995 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.253 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.253 12:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.254 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.254 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.254 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.254 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.512 00:17:18.512 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.512 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.512 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.771 { 00:17:18.771 "cntlid": 1, 00:17:18.771 "qid": 0, 00:17:18.771 "state": "enabled", 00:17:18.771 "thread": "nvmf_tgt_poll_group_000", 00:17:18.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.771 "listen_address": { 00:17:18.771 "trtype": "TCP", 00:17:18.771 "adrfam": "IPv4", 00:17:18.771 "traddr": "10.0.0.2", 00:17:18.771 "trsvcid": "4420" 00:17:18.771 }, 00:17:18.771 "peer_address": { 00:17:18.771 "trtype": "TCP", 00:17:18.771 "adrfam": "IPv4", 00:17:18.771 "traddr": "10.0.0.1", 00:17:18.771 "trsvcid": "39696" 00:17:18.771 }, 00:17:18.771 "auth": { 00:17:18.771 "state": "completed", 00:17:18.771 "digest": "sha256", 00:17:18.771 "dhgroup": "null" 00:17:18.771 } 00:17:18.771 } 00:17:18.771 ]' 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.771 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
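The qpairs blob above is validated by three jq probes (`.auth.digest`, `.auth.dhgroup`, `.auth.state`). The same checks can be sketched with python3 standing in for jq, against the JSON reduced to the fields the probes actually read (values taken verbatim from this trace):

```shell
# Minimal re-check of the auth block from the nvmf_subsystem_get_qpairs output.
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'

probe() {
    # python3 plays the role of jq -r ".[0].auth.<field>"
    python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["auth"][sys.argv[1]])' \
        "$1" <<< "$qpairs"
}

[[ $(probe digest) == sha256 ]]      # mirrors [[ sha256 == \s\h\a\2\5\6 ]]
[[ $(probe dhgroup) == null ]]
[[ $(probe state) == completed ]]
```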
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.028 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:19.028 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:19.596 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.855 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.114 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.114 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.114 { 00:17:20.114 "cntlid": 3, 00:17:20.114 "qid": 0, 00:17:20.114 "state": "enabled", 00:17:20.114 "thread": "nvmf_tgt_poll_group_000", 00:17:20.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.114 "listen_address": { 00:17:20.114 "trtype": "TCP", 00:17:20.114 "adrfam": "IPv4", 00:17:20.114 
"traddr": "10.0.0.2", 00:17:20.114 "trsvcid": "4420" 00:17:20.114 }, 00:17:20.114 "peer_address": { 00:17:20.114 "trtype": "TCP", 00:17:20.114 "adrfam": "IPv4", 00:17:20.115 "traddr": "10.0.0.1", 00:17:20.115 "trsvcid": "39730" 00:17:20.115 }, 00:17:20.115 "auth": { 00:17:20.115 "state": "completed", 00:17:20.115 "digest": "sha256", 00:17:20.115 "dhgroup": "null" 00:17:20.115 } 00:17:20.115 } 00:17:20.115 ]' 00:17:20.115 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.373 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.630 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:20.630 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.197 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.456 00:17:21.456 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.456 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.456 
12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:21.715 {
00:17:21.715 "cntlid": 5,
00:17:21.715 "qid": 0,
00:17:21.715 "state": "enabled",
00:17:21.715 "thread": "nvmf_tgt_poll_group_000",
00:17:21.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:21.715 "listen_address": {
00:17:21.715 "trtype": "TCP",
00:17:21.715 "adrfam": "IPv4",
00:17:21.715 "traddr": "10.0.0.2",
00:17:21.715 "trsvcid": "4420"
00:17:21.715 },
00:17:21.715 "peer_address": {
00:17:21.715 "trtype": "TCP",
00:17:21.715 "adrfam": "IPv4",
00:17:21.715 "traddr": "10.0.0.1",
00:17:21.715 "trsvcid": "39750"
00:17:21.715 },
00:17:21.715 "auth": {
00:17:21.715 "state": "completed",
00:17:21.715 "digest": "sha256",
00:17:21.715 "dhgroup": "null"
00:17:21.715 }
00:17:21.715 }
00:17:21.715 ]'
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:21.715 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:21.973 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:21.973 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:21.973 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:21.973 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:21.973 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:22.231 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG:
00:17:22.231 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG:
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:22.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:22.797 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.054
00:17:23.054 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:23.054 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:23.054 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:23.311 {
00:17:23.311 "cntlid": 7,
00:17:23.311 "qid": 0,
00:17:23.311 "state": "enabled",
00:17:23.311 "thread": "nvmf_tgt_poll_group_000",
00:17:23.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:23.311 "listen_address": {
00:17:23.311 "trtype": "TCP",
00:17:23.311 "adrfam": "IPv4",
00:17:23.311 "traddr": "10.0.0.2",
00:17:23.311 "trsvcid": "4420"
00:17:23.311 },
00:17:23.311 "peer_address": {
00:17:23.311 "trtype": "TCP",
00:17:23.311 "adrfam": "IPv4",
00:17:23.311 "traddr": "10.0.0.1",
00:17:23.311 "trsvcid": "39778"
00:17:23.311 },
00:17:23.311 "auth": {
00:17:23.311 "state": "completed",
00:17:23.311 "digest": "sha256",
00:17:23.311 "dhgroup": "null"
00:17:23.311 }
00:17:23.311 }
00:17:23.311 ]'
00:17:23.311 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:23.311 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:23.311 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:23.311 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:23.311 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:23.569 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.569 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.569 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:23.569 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=:
00:17:23.569 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=:
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:24.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:24.134 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.392 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.650
00:17:24.650 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:24.650 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.650 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:24.908 {
00:17:24.908 "cntlid": 9,
00:17:24.908 "qid": 0,
00:17:24.908 "state": "enabled",
00:17:24.908 "thread": "nvmf_tgt_poll_group_000",
00:17:24.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:24.908 "listen_address": {
00:17:24.908 "trtype": "TCP",
00:17:24.908 "adrfam": "IPv4",
00:17:24.908 "traddr": "10.0.0.2",
00:17:24.908 "trsvcid": "4420"
00:17:24.908 },
00:17:24.908 "peer_address": {
00:17:24.908 "trtype": "TCP",
00:17:24.908 "adrfam": "IPv4",
00:17:24.908 "traddr": "10.0.0.1",
00:17:24.908 "trsvcid": "48724"
00:17:24.908 },
00:17:24.908 "auth": {
00:17:24.908 "state": "completed",
00:17:24.908 "digest": "sha256",
00:17:24.908 "dhgroup": "ffdhe2048"
00:17:24.908 }
00:17:24.908 }
00:17:24.908 ]'
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:24.908 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:25.166 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:25.166 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.166 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.166 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=:
00:17:25.166 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=:
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:25.732 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.991 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:26.250
00:17:26.250 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:26.250 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.250 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.508 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:26.508 {
00:17:26.508 "cntlid": 11,
00:17:26.508 "qid": 0,
00:17:26.508 "state": "enabled",
00:17:26.508 "thread": "nvmf_tgt_poll_group_000",
00:17:26.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:26.508 "listen_address": {
00:17:26.508 "trtype": "TCP",
00:17:26.508 "adrfam": "IPv4",
00:17:26.508 "traddr": "10.0.0.2",
00:17:26.508 "trsvcid": "4420"
00:17:26.508 },
00:17:26.508 "peer_address": {
00:17:26.508 "trtype": "TCP",
00:17:26.508 "adrfam": "IPv4",
00:17:26.508 "traddr": "10.0.0.1",
00:17:26.508 "trsvcid": "48740"
00:17:26.508 },
00:17:26.508 "auth": {
00:17:26.508 "state": "completed",
00:17:26.508 "digest": "sha256",
00:17:26.508 "dhgroup": "ffdhe2048"
00:17:26.508 }
00:17:26.508 }
00:17:26.508 ]'
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.509 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.767 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==:
00:17:26.767 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==:
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:27.334 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.592 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.593 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.851
00:17:27.851 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:27.851 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.851 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.109 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:28.109 {
00:17:28.109 "cntlid": 13,
00:17:28.109 "qid": 0,
00:17:28.109 "state": "enabled",
00:17:28.109 "thread": "nvmf_tgt_poll_group_000",
00:17:28.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:28.110 "listen_address": {
00:17:28.110 "trtype": "TCP",
00:17:28.110 "adrfam": "IPv4",
00:17:28.110 "traddr": "10.0.0.2",
00:17:28.110 "trsvcid": "4420"
00:17:28.110 },
00:17:28.110 "peer_address": {
00:17:28.110 "trtype": "TCP",
00:17:28.110 "adrfam": "IPv4",
00:17:28.110 "traddr": "10.0.0.1",
00:17:28.110 "trsvcid": "48754"
00:17:28.110 },
00:17:28.110 "auth": {
00:17:28.110 "state": "completed",
00:17:28.110 "digest": "sha256",
00:17:28.110 "dhgroup": "ffdhe2048"
00:17:28.110 }
00:17:28.110 }
00:17:28.110 ]'
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:28.110 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:28.368 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG:
00:17:28.368 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG:
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:28.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:28.935 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:29.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:29.452
00:17:29.452 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:29.452 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.452 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:29.711 {
00:17:29.711 "cntlid": 15,
00:17:29.711 "qid": 0,
00:17:29.711 "state": "enabled",
00:17:29.711 "thread": "nvmf_tgt_poll_group_000",
00:17:29.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:29.711 "listen_address": {
00:17:29.711 "trtype": "TCP",
00:17:29.711 "adrfam": "IPv4",
00:17:29.711 "traddr": "10.0.0.2",
00:17:29.711 "trsvcid": "4420"
00:17:29.711 },
00:17:29.711 "peer_address": {
00:17:29.711 "trtype": "TCP",
00:17:29.711 "adrfam": "IPv4",
00:17:29.711 "traddr": "10.0.0.1",
00:17:29.711 "trsvcid": "48780"
00:17:29.711 },
00:17:29.711 "auth": {
00:17:29.711 "state": "completed",
00:17:29.711 "digest": "sha256",
00:17:29.711 "dhgroup": "ffdhe2048"
00:17:29.711 }
00:17:29.711 }
00:17:29.711 ]'
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.711 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.969 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=:
00:17:29.969 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=:
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:30.536 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.795 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.054 00:17:31.054 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.054 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.054 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.312 
12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.312 { 00:17:31.312 "cntlid": 17, 00:17:31.312 "qid": 0, 00:17:31.312 "state": "enabled", 00:17:31.312 "thread": "nvmf_tgt_poll_group_000", 00:17:31.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.312 "listen_address": { 00:17:31.312 "trtype": "TCP", 00:17:31.312 "adrfam": "IPv4", 00:17:31.312 "traddr": "10.0.0.2", 00:17:31.312 "trsvcid": "4420" 00:17:31.312 }, 00:17:31.312 "peer_address": { 00:17:31.312 "trtype": "TCP", 00:17:31.312 "adrfam": "IPv4", 00:17:31.312 "traddr": "10.0.0.1", 00:17:31.312 "trsvcid": "48794" 00:17:31.312 }, 00:17:31.312 "auth": { 00:17:31.312 "state": "completed", 00:17:31.312 "digest": "sha256", 00:17:31.312 "dhgroup": "ffdhe3072" 00:17:31.312 } 00:17:31.312 } 00:17:31.312 ]' 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.312 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.312 12:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.312 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.313 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.313 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.571 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:31.571 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.138 12:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.138 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.397 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:32.397 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.397 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.397 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 12:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.397 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.656 00:17:32.656 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.656 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.656 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.915 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.915 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.915 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.916 { 00:17:32.916 "cntlid": 19, 00:17:32.916 "qid": 0, 00:17:32.916 "state": "enabled", 00:17:32.916 "thread": "nvmf_tgt_poll_group_000", 00:17:32.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:32.916 "listen_address": { 00:17:32.916 "trtype": "TCP", 00:17:32.916 "adrfam": "IPv4", 00:17:32.916 "traddr": "10.0.0.2", 00:17:32.916 "trsvcid": "4420" 00:17:32.916 }, 00:17:32.916 "peer_address": { 00:17:32.916 "trtype": "TCP", 00:17:32.916 "adrfam": "IPv4", 00:17:32.916 "traddr": "10.0.0.1", 00:17:32.916 "trsvcid": "48828" 00:17:32.916 }, 00:17:32.916 "auth": { 00:17:32.916 "state": "completed", 00:17:32.916 "digest": "sha256", 00:17:32.916 "dhgroup": "ffdhe3072" 00:17:32.916 } 00:17:32.916 } 00:17:32.916 ]' 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.916 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:33.175 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:33.175 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.742 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.001 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.261 00:17:34.261 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.261 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.261 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.520 { 00:17:34.520 "cntlid": 21, 00:17:34.520 "qid": 0, 00:17:34.520 "state": "enabled", 00:17:34.520 "thread": "nvmf_tgt_poll_group_000", 00:17:34.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:34.520 "listen_address": { 00:17:34.520 "trtype": "TCP", 00:17:34.520 "adrfam": "IPv4", 00:17:34.520 "traddr": "10.0.0.2", 00:17:34.520 "trsvcid": "4420" 00:17:34.520 }, 00:17:34.520 "peer_address": { 00:17:34.520 "trtype": "TCP", 00:17:34.520 "adrfam": "IPv4", 
00:17:34.520 "traddr": "10.0.0.1", 00:17:34.520 "trsvcid": "34392" 00:17:34.520 }, 00:17:34.520 "auth": { 00:17:34.520 "state": "completed", 00:17:34.520 "digest": "sha256", 00:17:34.520 "dhgroup": "ffdhe3072" 00:17:34.520 } 00:17:34.520 } 00:17:34.520 ]' 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.520 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.779 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:34.779 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:35.345 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.346 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.604 12:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.604 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.862 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.862 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.120 { 00:17:36.120 "cntlid": 23, 00:17:36.120 "qid": 0, 00:17:36.120 "state": "enabled", 00:17:36.120 "thread": "nvmf_tgt_poll_group_000", 00:17:36.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.120 "listen_address": { 00:17:36.120 "trtype": "TCP", 00:17:36.120 "adrfam": "IPv4", 00:17:36.120 "traddr": "10.0.0.2", 00:17:36.120 "trsvcid": "4420" 00:17:36.120 }, 00:17:36.120 "peer_address": { 00:17:36.120 "trtype": "TCP", 00:17:36.120 "adrfam": "IPv4", 00:17:36.120 "traddr": "10.0.0.1", 00:17:36.120 "trsvcid": "34412" 00:17:36.120 }, 00:17:36.120 "auth": { 00:17:36.120 "state": "completed", 00:17:36.120 "digest": "sha256", 00:17:36.120 "dhgroup": "ffdhe3072" 00:17:36.120 } 00:17:36.120 } 00:17:36.120 ]' 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.120 12:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.120 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.379 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:36.379 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.947 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:37.206 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.206 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.206 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.206 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.464 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.464 12:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.464 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.464 { 00:17:37.464 "cntlid": 25, 00:17:37.464 "qid": 0, 00:17:37.464 "state": "enabled", 00:17:37.464 "thread": "nvmf_tgt_poll_group_000", 00:17:37.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.464 "listen_address": { 00:17:37.464 "trtype": "TCP", 00:17:37.464 "adrfam": "IPv4", 00:17:37.464 "traddr": "10.0.0.2", 00:17:37.464 "trsvcid": "4420" 00:17:37.464 }, 00:17:37.464 "peer_address": { 00:17:37.464 "trtype": "TCP", 00:17:37.464 "adrfam": "IPv4", 00:17:37.464 "traddr": "10.0.0.1", 00:17:37.464 "trsvcid": "34440" 00:17:37.464 }, 00:17:37.464 "auth": { 00:17:37.464 "state": "completed", 00:17:37.464 "digest": "sha256", 00:17:37.464 "dhgroup": "ffdhe4096" 00:17:37.464 } 00:17:37.464 } 00:17:37.464 ]' 00:17:37.465 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.723 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.724 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.982 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:37.983 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.550 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.550 12:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.809 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.067 00:17:39.067 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.067 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.067 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.326 { 00:17:39.326 "cntlid": 27, 00:17:39.326 "qid": 0, 00:17:39.326 "state": "enabled", 00:17:39.326 "thread": "nvmf_tgt_poll_group_000", 00:17:39.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.326 "listen_address": { 00:17:39.326 "trtype": "TCP", 00:17:39.326 "adrfam": "IPv4", 00:17:39.326 "traddr": "10.0.0.2", 00:17:39.326 
"trsvcid": "4420" 00:17:39.326 }, 00:17:39.326 "peer_address": { 00:17:39.326 "trtype": "TCP", 00:17:39.326 "adrfam": "IPv4", 00:17:39.326 "traddr": "10.0.0.1", 00:17:39.326 "trsvcid": "34472" 00:17:39.326 }, 00:17:39.326 "auth": { 00:17:39.326 "state": "completed", 00:17:39.326 "digest": "sha256", 00:17:39.326 "dhgroup": "ffdhe4096" 00:17:39.326 } 00:17:39.326 } 00:17:39.326 ]' 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.326 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.586 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:39.586 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.152 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.411 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.670 00:17:40.670 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.670 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:40.670 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.929 { 00:17:40.929 "cntlid": 29, 00:17:40.929 "qid": 0, 00:17:40.929 "state": "enabled", 00:17:40.929 "thread": "nvmf_tgt_poll_group_000", 00:17:40.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:40.929 "listen_address": { 00:17:40.929 "trtype": "TCP", 00:17:40.929 "adrfam": "IPv4", 00:17:40.929 "traddr": "10.0.0.2", 00:17:40.929 "trsvcid": "4420" 00:17:40.929 }, 00:17:40.929 "peer_address": { 00:17:40.929 "trtype": "TCP", 00:17:40.929 "adrfam": "IPv4", 00:17:40.929 "traddr": "10.0.0.1", 00:17:40.929 "trsvcid": "34494" 00:17:40.929 }, 00:17:40.929 "auth": { 00:17:40.929 "state": "completed", 00:17:40.929 "digest": "sha256", 00:17:40.929 "dhgroup": "ffdhe4096" 00:17:40.929 } 00:17:40.929 } 00:17:40.929 ]' 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.929 12:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.929 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.188 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:41.188 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:41.767 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.767 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.767 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.768 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.768 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.768 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.768 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.768 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.034 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.292 00:17:42.292 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.292 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.292 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.551 { 00:17:42.551 "cntlid": 31, 00:17:42.551 "qid": 0, 00:17:42.551 "state": "enabled", 00:17:42.551 "thread": "nvmf_tgt_poll_group_000", 00:17:42.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.551 "listen_address": { 00:17:42.551 "trtype": "TCP", 00:17:42.551 "adrfam": "IPv4", 00:17:42.551 "traddr": "10.0.0.2", 00:17:42.551 "trsvcid": "4420" 00:17:42.551 }, 00:17:42.551 "peer_address": { 00:17:42.551 "trtype": "TCP", 00:17:42.551 "adrfam": "IPv4", 00:17:42.551 "traddr": "10.0.0.1", 00:17:42.551 "trsvcid": "34526" 00:17:42.551 }, 00:17:42.551 "auth": { 00:17:42.551 "state": "completed", 00:17:42.551 "digest": "sha256", 00:17:42.551 "dhgroup": "ffdhe4096" 00:17:42.551 } 00:17:42.551 } 00:17:42.551 ]' 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.551 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.811 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:42.811 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.379 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.379 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.637 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.895 00:17:43.895 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.895 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.895 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.154 { 00:17:44.154 "cntlid": 33, 00:17:44.154 "qid": 0, 00:17:44.154 "state": "enabled", 00:17:44.154 "thread": "nvmf_tgt_poll_group_000", 00:17:44.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.154 "listen_address": { 00:17:44.154 "trtype": "TCP", 00:17:44.154 "adrfam": "IPv4", 00:17:44.154 "traddr": "10.0.0.2", 00:17:44.154 
"trsvcid": "4420" 00:17:44.154 }, 00:17:44.154 "peer_address": { 00:17:44.154 "trtype": "TCP", 00:17:44.154 "adrfam": "IPv4", 00:17:44.154 "traddr": "10.0.0.1", 00:17:44.154 "trsvcid": "34548" 00:17:44.154 }, 00:17:44.154 "auth": { 00:17:44.154 "state": "completed", 00:17:44.154 "digest": "sha256", 00:17:44.154 "dhgroup": "ffdhe6144" 00:17:44.154 } 00:17:44.154 } 00:17:44.154 ]' 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.154 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.412 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:44.412 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.979 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.238 12:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.238 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.496 00:17:45.496 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.496 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.496 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.755 { 00:17:45.755 "cntlid": 35, 00:17:45.755 "qid": 0, 00:17:45.755 "state": "enabled", 00:17:45.755 "thread": "nvmf_tgt_poll_group_000", 00:17:45.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:45.755 "listen_address": { 00:17:45.755 "trtype": "TCP", 00:17:45.755 "adrfam": "IPv4", 00:17:45.755 "traddr": "10.0.0.2", 00:17:45.755 "trsvcid": "4420" 00:17:45.755 }, 00:17:45.755 "peer_address": { 00:17:45.755 "trtype": "TCP", 00:17:45.755 "adrfam": "IPv4", 00:17:45.755 "traddr": "10.0.0.1", 00:17:45.755 "trsvcid": "39956" 00:17:45.755 }, 00:17:45.755 "auth": { 00:17:45.755 "state": "completed", 00:17:45.755 "digest": "sha256", 00:17:45.755 "dhgroup": "ffdhe6144" 00:17:45.755 } 00:17:45.755 } 00:17:45.755 ]' 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.755 12:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.755 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.030 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.030 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.030 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.030 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:46.030 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.691 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.950 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.208 00:17:47.208 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.208 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.208 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.467 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.467 { 00:17:47.467 "cntlid": 37, 00:17:47.467 "qid": 0, 00:17:47.467 "state": "enabled", 00:17:47.467 "thread": "nvmf_tgt_poll_group_000", 00:17:47.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:47.467 "listen_address": { 00:17:47.467 "trtype": "TCP", 00:17:47.467 "adrfam": "IPv4", 00:17:47.467 "traddr": "10.0.0.2", 00:17:47.467 "trsvcid": "4420" 00:17:47.467 }, 00:17:47.467 "peer_address": { 00:17:47.467 "trtype": "TCP", 00:17:47.467 "adrfam": "IPv4", 00:17:47.467 "traddr": "10.0.0.1", 00:17:47.467 "trsvcid": "39986" 00:17:47.467 }, 00:17:47.467 "auth": { 00:17:47.467 "state": "completed", 00:17:47.467 "digest": "sha256", 00:17:47.467 "dhgroup": "ffdhe6144" 00:17:47.467 } 00:17:47.467 } 00:17:47.467 ]' 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.467 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.726 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:47.726 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.292 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.551 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.810 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.068 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.068 { 00:17:49.068 "cntlid": 39, 00:17:49.068 "qid": 0, 00:17:49.068 "state": "enabled", 00:17:49.068 "thread": "nvmf_tgt_poll_group_000", 00:17:49.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:49.068 "listen_address": { 00:17:49.068 "trtype": "TCP", 00:17:49.068 "adrfam": 
"IPv4", 00:17:49.068 "traddr": "10.0.0.2", 00:17:49.068 "trsvcid": "4420" 00:17:49.068 }, 00:17:49.068 "peer_address": { 00:17:49.068 "trtype": "TCP", 00:17:49.068 "adrfam": "IPv4", 00:17:49.068 "traddr": "10.0.0.1", 00:17:49.069 "trsvcid": "40014" 00:17:49.069 }, 00:17:49.069 "auth": { 00:17:49.069 "state": "completed", 00:17:49.069 "digest": "sha256", 00:17:49.069 "dhgroup": "ffdhe6144" 00:17:49.069 } 00:17:49.069 } 00:17:49.069 ]' 00:17:49.069 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.327 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.586 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:49.586 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.153 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.413 
12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.671 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.931 12:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.931 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.931 { 00:17:50.931 "cntlid": 41, 00:17:50.932 "qid": 0, 00:17:50.932 "state": "enabled", 00:17:50.932 "thread": "nvmf_tgt_poll_group_000", 00:17:50.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.932 "listen_address": { 00:17:50.932 "trtype": "TCP", 00:17:50.932 "adrfam": "IPv4", 00:17:50.932 "traddr": "10.0.0.2", 00:17:50.932 "trsvcid": "4420" 00:17:50.932 }, 00:17:50.932 "peer_address": { 00:17:50.932 "trtype": "TCP", 00:17:50.932 "adrfam": "IPv4", 00:17:50.932 "traddr": "10.0.0.1", 00:17:50.932 "trsvcid": "40054" 00:17:50.932 }, 00:17:50.932 "auth": { 00:17:50.932 "state": "completed", 00:17:50.932 "digest": "sha256", 00:17:50.932 "dhgroup": "ffdhe8192" 00:17:50.932 } 00:17:50.932 } 00:17:50.932 ]' 00:17:50.932 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.190 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.448 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:51.448 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.016 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.274 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.274 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.274 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.274 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.533 00:17:52.533 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.533 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.533 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.791 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.791 { 00:17:52.791 "cntlid": 43, 00:17:52.791 "qid": 0, 00:17:52.791 "state": "enabled", 00:17:52.791 "thread": "nvmf_tgt_poll_group_000", 00:17:52.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.791 "listen_address": { 00:17:52.791 "trtype": "TCP", 00:17:52.791 "adrfam": "IPv4", 00:17:52.791 "traddr": "10.0.0.2", 00:17:52.791 "trsvcid": "4420" 00:17:52.791 }, 00:17:52.791 "peer_address": { 00:17:52.791 "trtype": "TCP", 00:17:52.791 "adrfam": "IPv4", 00:17:52.791 "traddr": "10.0.0.1", 00:17:52.791 "trsvcid": "40082" 00:17:52.791 }, 00:17:52.791 "auth": { 00:17:52.791 "state": "completed", 00:17:52.791 "digest": "sha256", 00:17:52.791 "dhgroup": "ffdhe8192" 00:17:52.791 } 00:17:52.791 } 00:17:52.791 ]' 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.791 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.049 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.049 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.049 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.049 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:53.050 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.618 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.876 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.877 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.443 00:17:54.443 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.443 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.443 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.702 { 00:17:54.702 "cntlid": 45, 00:17:54.702 "qid": 0, 00:17:54.702 "state": "enabled", 00:17:54.702 "thread": "nvmf_tgt_poll_group_000", 00:17:54.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.702 
"listen_address": { 00:17:54.702 "trtype": "TCP", 00:17:54.702 "adrfam": "IPv4", 00:17:54.702 "traddr": "10.0.0.2", 00:17:54.702 "trsvcid": "4420" 00:17:54.702 }, 00:17:54.702 "peer_address": { 00:17:54.702 "trtype": "TCP", 00:17:54.702 "adrfam": "IPv4", 00:17:54.702 "traddr": "10.0.0.1", 00:17:54.702 "trsvcid": "39516" 00:17:54.702 }, 00:17:54.702 "auth": { 00:17:54.702 "state": "completed", 00:17:54.702 "digest": "sha256", 00:17:54.702 "dhgroup": "ffdhe8192" 00:17:54.702 } 00:17:54.702 } 00:17:54.702 ]' 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.702 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.703 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.961 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:54.961 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.528 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.785 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.786 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.786 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.354 00:17:56.354 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.354 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:56.354 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.354 { 00:17:56.354 "cntlid": 47, 00:17:56.354 "qid": 0, 00:17:56.354 "state": "enabled", 00:17:56.354 "thread": "nvmf_tgt_poll_group_000", 00:17:56.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.354 "listen_address": { 00:17:56.354 "trtype": "TCP", 00:17:56.354 "adrfam": "IPv4", 00:17:56.354 "traddr": "10.0.0.2", 00:17:56.354 "trsvcid": "4420" 00:17:56.354 }, 00:17:56.354 "peer_address": { 00:17:56.354 "trtype": "TCP", 00:17:56.354 "adrfam": "IPv4", 00:17:56.354 "traddr": "10.0.0.1", 00:17:56.354 "trsvcid": "39534" 00:17:56.354 }, 00:17:56.354 "auth": { 00:17:56.354 "state": "completed", 00:17:56.354 "digest": "sha256", 00:17:56.354 "dhgroup": "ffdhe8192" 00:17:56.354 } 00:17:56.354 } 00:17:56.354 ]' 00:17:56.354 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.613 12:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.613 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.871 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:56.871 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.440 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.440 
12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.440 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.699 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.699 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.699 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.699 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.699 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.957 { 00:17:57.957 "cntlid": 49, 00:17:57.957 "qid": 0, 00:17:57.957 "state": "enabled", 00:17:57.957 "thread": "nvmf_tgt_poll_group_000", 00:17:57.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:57.957 "listen_address": { 00:17:57.957 "trtype": "TCP", 00:17:57.957 "adrfam": "IPv4", 00:17:57.957 "traddr": "10.0.0.2", 00:17:57.957 "trsvcid": "4420" 00:17:57.957 }, 00:17:57.957 "peer_address": { 00:17:57.957 "trtype": "TCP", 00:17:57.957 "adrfam": "IPv4", 00:17:57.957 "traddr": "10.0.0.1", 00:17:57.957 "trsvcid": "39558" 00:17:57.957 }, 00:17:57.957 "auth": { 00:17:57.957 "state": "completed", 00:17:57.957 "digest": "sha384", 00:17:57.957 "dhgroup": "null" 00:17:57.957 } 00:17:57.957 } 00:17:57.957 ]' 00:17:57.957 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:58.216 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.475 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:58.475 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.042 12:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.042 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.043 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.043 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.043 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.301 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.301 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.301 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.301 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.301 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.560 { 00:17:59.560 "cntlid": 51, 00:17:59.560 "qid": 0, 00:17:59.560 "state": "enabled", 00:17:59.560 "thread": "nvmf_tgt_poll_group_000", 00:17:59.560 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.560 "listen_address": { 00:17:59.560 "trtype": "TCP", 00:17:59.560 "adrfam": "IPv4", 00:17:59.560 "traddr": "10.0.0.2", 00:17:59.560 "trsvcid": "4420" 00:17:59.560 }, 00:17:59.560 "peer_address": { 00:17:59.560 "trtype": "TCP", 00:17:59.560 "adrfam": "IPv4", 00:17:59.560 "traddr": "10.0.0.1", 00:17:59.560 "trsvcid": "39586" 00:17:59.560 }, 00:17:59.560 "auth": { 00:17:59.560 "state": "completed", 00:17:59.560 "digest": "sha384", 00:17:59.560 "dhgroup": "null" 00:17:59.560 } 00:17:59.560 } 00:17:59.560 ]' 00:17:59.560 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.819 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.078 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:00.078 12:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.646 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.905 00:18:00.905 12:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.905 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.905 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.164 { 00:18:01.164 "cntlid": 53, 00:18:01.164 "qid": 0, 00:18:01.164 "state": "enabled", 00:18:01.164 "thread": "nvmf_tgt_poll_group_000", 00:18:01.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.164 "listen_address": { 00:18:01.164 "trtype": "TCP", 00:18:01.164 "adrfam": "IPv4", 00:18:01.164 "traddr": "10.0.0.2", 00:18:01.164 "trsvcid": "4420" 00:18:01.164 }, 00:18:01.164 "peer_address": { 00:18:01.164 "trtype": "TCP", 00:18:01.164 "adrfam": "IPv4", 00:18:01.164 "traddr": "10.0.0.1", 00:18:01.164 "trsvcid": "39606" 00:18:01.164 }, 00:18:01.164 "auth": { 00:18:01.164 "state": "completed", 00:18:01.164 "digest": "sha384", 00:18:01.164 "dhgroup": "null" 00:18:01.164 } 00:18:01.164 } 00:18:01.164 ]' 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.164 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.165 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.165 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.423 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.423 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.423 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.423 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:01.423 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.990 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:02.248 
12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.248 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.507 00:18:02.507 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.507 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.507 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.765 12:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.765 { 00:18:02.765 "cntlid": 55, 00:18:02.765 "qid": 0, 00:18:02.765 "state": "enabled", 00:18:02.765 "thread": "nvmf_tgt_poll_group_000", 00:18:02.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.765 "listen_address": { 00:18:02.765 "trtype": "TCP", 00:18:02.765 "adrfam": "IPv4", 00:18:02.765 "traddr": "10.0.0.2", 00:18:02.765 "trsvcid": "4420" 00:18:02.765 }, 00:18:02.765 "peer_address": { 00:18:02.765 "trtype": "TCP", 00:18:02.765 "adrfam": "IPv4", 00:18:02.765 "traddr": "10.0.0.1", 00:18:02.765 "trsvcid": "39624" 00:18:02.765 }, 00:18:02.765 "auth": { 00:18:02.765 "state": "completed", 00:18:02.765 "digest": "sha384", 00:18:02.765 "dhgroup": "null" 00:18:02.765 } 00:18:02.765 } 00:18:02.765 ]' 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:02.765 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.024 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.024 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.024 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.024 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:03.024 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.599 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.599 12:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.858 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.117 00:18:04.117 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.117 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.117 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.376 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.376 { 00:18:04.376 "cntlid": 57, 00:18:04.376 "qid": 0, 00:18:04.376 "state": "enabled", 00:18:04.376 "thread": "nvmf_tgt_poll_group_000", 00:18:04.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.376 "listen_address": { 00:18:04.376 "trtype": "TCP", 00:18:04.376 "adrfam": "IPv4", 00:18:04.376 "traddr": "10.0.0.2", 00:18:04.376 
"trsvcid": "4420" 00:18:04.376 }, 00:18:04.376 "peer_address": { 00:18:04.376 "trtype": "TCP", 00:18:04.376 "adrfam": "IPv4", 00:18:04.376 "traddr": "10.0.0.1", 00:18:04.376 "trsvcid": "39666" 00:18:04.377 }, 00:18:04.377 "auth": { 00:18:04.377 "state": "completed", 00:18:04.377 "digest": "sha384", 00:18:04.377 "dhgroup": "ffdhe2048" 00:18:04.377 } 00:18:04.377 } 00:18:04.377 ]' 00:18:04.377 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.377 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.635 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:04.635 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.203 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.462 12:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.462 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.720 00:18:05.720 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.720 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.720 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.979 { 00:18:05.979 "cntlid": 59, 00:18:05.979 "qid": 0, 00:18:05.979 "state": "enabled", 00:18:05.979 "thread": "nvmf_tgt_poll_group_000", 00:18:05.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.979 "listen_address": { 00:18:05.979 "trtype": "TCP", 00:18:05.979 "adrfam": "IPv4", 00:18:05.979 "traddr": "10.0.0.2", 00:18:05.979 "trsvcid": "4420" 00:18:05.979 }, 00:18:05.979 "peer_address": { 00:18:05.979 "trtype": "TCP", 00:18:05.979 "adrfam": "IPv4", 00:18:05.979 "traddr": "10.0.0.1", 00:18:05.979 "trsvcid": "54812" 00:18:05.979 }, 00:18:05.979 "auth": { 00:18:05.979 "state": "completed", 00:18:05.979 "digest": "sha384", 00:18:05.979 "dhgroup": "ffdhe2048" 00:18:05.979 } 00:18:05.979 } 00:18:05.979 ]' 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.979 12:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.979 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.980 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.980 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.980 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.980 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.237 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:06.237 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.804 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.063 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.321 00:18:07.321 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.321 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.321 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.580 12:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.580 { 00:18:07.580 "cntlid": 61, 00:18:07.580 "qid": 0, 00:18:07.580 "state": "enabled", 00:18:07.580 "thread": "nvmf_tgt_poll_group_000", 00:18:07.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.580 "listen_address": { 00:18:07.580 "trtype": "TCP", 00:18:07.580 "adrfam": "IPv4", 00:18:07.580 "traddr": "10.0.0.2", 00:18:07.580 "trsvcid": "4420" 00:18:07.580 }, 00:18:07.580 "peer_address": { 00:18:07.580 "trtype": "TCP", 00:18:07.580 "adrfam": "IPv4", 00:18:07.580 "traddr": "10.0.0.1", 00:18:07.580 "trsvcid": "54826" 00:18:07.580 }, 00:18:07.580 "auth": { 00:18:07.580 "state": "completed", 00:18:07.580 "digest": "sha384", 00:18:07.580 "dhgroup": "ffdhe2048" 00:18:07.580 } 00:18:07.580 } 00:18:07.580 ]' 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.580 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.839 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:07.839 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:08.407 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.407 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.665 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.666 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.924 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.924 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.183 { 00:18:09.183 "cntlid": 63, 00:18:09.183 "qid": 0, 00:18:09.183 "state": "enabled", 00:18:09.183 "thread": "nvmf_tgt_poll_group_000", 00:18:09.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.183 "listen_address": { 00:18:09.183 "trtype": "TCP", 00:18:09.183 "adrfam": 
"IPv4", 00:18:09.183 "traddr": "10.0.0.2", 00:18:09.183 "trsvcid": "4420" 00:18:09.183 }, 00:18:09.183 "peer_address": { 00:18:09.183 "trtype": "TCP", 00:18:09.183 "adrfam": "IPv4", 00:18:09.183 "traddr": "10.0.0.1", 00:18:09.183 "trsvcid": "54848" 00:18:09.183 }, 00:18:09.183 "auth": { 00:18:09.183 "state": "completed", 00:18:09.183 "digest": "sha384", 00:18:09.183 "dhgroup": "ffdhe2048" 00:18:09.183 } 00:18:09.183 } 00:18:09.183 ]' 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.183 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.184 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.442 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:09.442 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.010 
12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.010 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.011 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.269 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.269 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.528 12:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.528 { 00:18:10.528 "cntlid": 65, 00:18:10.528 "qid": 0, 00:18:10.528 "state": "enabled", 00:18:10.528 "thread": "nvmf_tgt_poll_group_000", 00:18:10.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.528 "listen_address": { 00:18:10.528 "trtype": "TCP", 00:18:10.528 "adrfam": "IPv4", 00:18:10.528 "traddr": "10.0.0.2", 00:18:10.528 "trsvcid": "4420" 00:18:10.528 }, 00:18:10.528 "peer_address": { 00:18:10.528 "trtype": "TCP", 00:18:10.528 "adrfam": "IPv4", 00:18:10.528 "traddr": "10.0.0.1", 00:18:10.528 "trsvcid": "54872" 00:18:10.528 }, 00:18:10.528 "auth": { 00:18:10.528 "state": "completed", 00:18:10.528 "digest": "sha384", 00:18:10.528 "dhgroup": "ffdhe3072" 00:18:10.528 } 00:18:10.528 } 00:18:10.528 ]' 00:18:10.528 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.786 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.787 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.045 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:11.045 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.612 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.613 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.871 00:18:11.871 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.871 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.871 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.129 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.130 12:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.130 { 00:18:12.130 "cntlid": 67, 00:18:12.130 "qid": 0, 00:18:12.130 "state": "enabled", 00:18:12.130 "thread": "nvmf_tgt_poll_group_000", 00:18:12.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.130 "listen_address": { 00:18:12.130 "trtype": "TCP", 00:18:12.130 "adrfam": "IPv4", 00:18:12.130 "traddr": "10.0.0.2", 00:18:12.130 "trsvcid": "4420" 00:18:12.130 }, 00:18:12.130 "peer_address": { 00:18:12.130 "trtype": "TCP", 00:18:12.130 "adrfam": "IPv4", 00:18:12.130 "traddr": "10.0.0.1", 00:18:12.130 "trsvcid": "54892" 00:18:12.130 }, 00:18:12.130 "auth": { 00:18:12.130 "state": "completed", 00:18:12.130 "digest": "sha384", 00:18:12.130 "dhgroup": "ffdhe3072" 00:18:12.130 } 00:18:12.130 } 00:18:12.130 ]' 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.130 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.388 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.388 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.388 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.388 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.389 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.389 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:12.389 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.955 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.213 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.214 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.472 00:18:13.472 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.472 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.472 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.730 { 00:18:13.730 "cntlid": 69, 00:18:13.730 "qid": 0, 00:18:13.730 "state": "enabled", 00:18:13.730 "thread": "nvmf_tgt_poll_group_000", 00:18:13.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:13.730 
"listen_address": { 00:18:13.730 "trtype": "TCP", 00:18:13.730 "adrfam": "IPv4", 00:18:13.730 "traddr": "10.0.0.2", 00:18:13.730 "trsvcid": "4420" 00:18:13.730 }, 00:18:13.730 "peer_address": { 00:18:13.730 "trtype": "TCP", 00:18:13.730 "adrfam": "IPv4", 00:18:13.730 "traddr": "10.0.0.1", 00:18:13.730 "trsvcid": "54904" 00:18:13.730 }, 00:18:13.730 "auth": { 00:18:13.730 "state": "completed", 00:18:13.730 "digest": "sha384", 00:18:13.730 "dhgroup": "ffdhe3072" 00:18:13.730 } 00:18:13.730 } 00:18:13.730 ]' 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.730 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.989 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.989 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.989 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.989 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:13.989 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.558 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.817 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.074 00:18:15.074 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.074 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:15.074 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.333 { 00:18:15.333 "cntlid": 71, 00:18:15.333 "qid": 0, 00:18:15.333 "state": "enabled", 00:18:15.333 "thread": "nvmf_tgt_poll_group_000", 00:18:15.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:15.333 "listen_address": { 00:18:15.333 "trtype": "TCP", 00:18:15.333 "adrfam": "IPv4", 00:18:15.333 "traddr": "10.0.0.2", 00:18:15.333 "trsvcid": "4420" 00:18:15.333 }, 00:18:15.333 "peer_address": { 00:18:15.333 "trtype": "TCP", 00:18:15.333 "adrfam": "IPv4", 00:18:15.333 "traddr": "10.0.0.1", 00:18:15.333 "trsvcid": "51168" 00:18:15.333 }, 00:18:15.333 "auth": { 00:18:15.333 "state": "completed", 00:18:15.333 "digest": "sha384", 00:18:15.333 "dhgroup": "ffdhe3072" 00:18:15.333 } 00:18:15.333 } 00:18:15.333 ]' 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.333 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.333 12:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.333 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.333 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.333 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.333 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.333 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.592 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:15.592 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.159 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.418 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.677 00:18:16.677 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.677 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.677 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.935 12:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.935 { 00:18:16.935 "cntlid": 73, 00:18:16.935 "qid": 0, 00:18:16.935 "state": "enabled", 00:18:16.935 "thread": "nvmf_tgt_poll_group_000", 00:18:16.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.935 "listen_address": { 00:18:16.935 "trtype": "TCP", 00:18:16.935 "adrfam": "IPv4", 00:18:16.935 "traddr": "10.0.0.2", 00:18:16.935 "trsvcid": "4420" 00:18:16.935 }, 00:18:16.935 "peer_address": { 00:18:16.935 "trtype": "TCP", 00:18:16.935 "adrfam": "IPv4", 00:18:16.935 "traddr": "10.0.0.1", 00:18:16.935 "trsvcid": "51198" 00:18:16.935 }, 00:18:16.935 "auth": { 00:18:16.935 "state": "completed", 00:18:16.935 "digest": "sha384", 00:18:16.935 "dhgroup": "ffdhe4096" 00:18:16.935 } 00:18:16.935 } 00:18:16.935 ]' 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.935 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.935 12:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.194 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:17.194 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.761 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.022 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.281 00:18:18.281 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.281 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.281 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.539 { 00:18:18.539 "cntlid": 75, 00:18:18.539 "qid": 0, 00:18:18.539 "state": "enabled", 00:18:18.539 "thread": "nvmf_tgt_poll_group_000", 00:18:18.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.539 
"listen_address": { 00:18:18.539 "trtype": "TCP", 00:18:18.539 "adrfam": "IPv4", 00:18:18.539 "traddr": "10.0.0.2", 00:18:18.539 "trsvcid": "4420" 00:18:18.539 }, 00:18:18.539 "peer_address": { 00:18:18.539 "trtype": "TCP", 00:18:18.539 "adrfam": "IPv4", 00:18:18.539 "traddr": "10.0.0.1", 00:18:18.539 "trsvcid": "51220" 00:18:18.539 }, 00:18:18.539 "auth": { 00:18:18.539 "state": "completed", 00:18:18.539 "digest": "sha384", 00:18:18.539 "dhgroup": "ffdhe4096" 00:18:18.539 } 00:18:18.539 } 00:18:18.539 ]' 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.539 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.799 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:18.799 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.366 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.625 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.883 00:18:19.883 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:19.883 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.883 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.141 { 00:18:20.141 "cntlid": 77, 00:18:20.141 "qid": 0, 00:18:20.141 "state": "enabled", 00:18:20.141 "thread": "nvmf_tgt_poll_group_000", 00:18:20.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.141 "listen_address": { 00:18:20.141 "trtype": "TCP", 00:18:20.141 "adrfam": "IPv4", 00:18:20.141 "traddr": "10.0.0.2", 00:18:20.141 "trsvcid": "4420" 00:18:20.141 }, 00:18:20.141 "peer_address": { 00:18:20.141 "trtype": "TCP", 00:18:20.141 "adrfam": "IPv4", 00:18:20.141 "traddr": "10.0.0.1", 00:18:20.141 "trsvcid": "51234" 00:18:20.141 }, 00:18:20.141 "auth": { 00:18:20.141 "state": "completed", 00:18:20.141 "digest": "sha384", 00:18:20.141 "dhgroup": "ffdhe4096" 00:18:20.141 } 00:18:20.141 } 00:18:20.141 ]' 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.141 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.141 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.399 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:20.400 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.966 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:21.224 12:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.224 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.482 00:18:21.482 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.482 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.482 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.741 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.741 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.741 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.741 12:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.741 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.741 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.741 { 00:18:21.741 "cntlid": 79, 00:18:21.741 "qid": 0, 00:18:21.741 "state": "enabled", 00:18:21.741 "thread": "nvmf_tgt_poll_group_000", 00:18:21.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.741 "listen_address": { 00:18:21.741 "trtype": "TCP", 00:18:21.742 "adrfam": "IPv4", 00:18:21.742 "traddr": "10.0.0.2", 00:18:21.742 "trsvcid": "4420" 00:18:21.742 }, 00:18:21.742 "peer_address": { 00:18:21.742 "trtype": "TCP", 00:18:21.742 "adrfam": "IPv4", 00:18:21.742 "traddr": "10.0.0.1", 00:18:21.742 "trsvcid": "51272" 00:18:21.742 }, 00:18:21.742 "auth": { 00:18:21.742 "state": "completed", 00:18:21.742 "digest": "sha384", 00:18:21.742 "dhgroup": "ffdhe4096" 00:18:21.742 } 00:18:21.742 } 00:18:21.742 ]' 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.742 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.742 12:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.000 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:22.000 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:22.569 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.828 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.829 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.091 00:18:23.091 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.091 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.091 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.393 { 00:18:23.393 "cntlid": 81, 00:18:23.393 "qid": 0, 00:18:23.393 "state": "enabled", 00:18:23.393 "thread": "nvmf_tgt_poll_group_000", 00:18:23.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.393 "listen_address": { 
00:18:23.393 "trtype": "TCP", 00:18:23.393 "adrfam": "IPv4", 00:18:23.393 "traddr": "10.0.0.2", 00:18:23.393 "trsvcid": "4420" 00:18:23.393 }, 00:18:23.393 "peer_address": { 00:18:23.393 "trtype": "TCP", 00:18:23.393 "adrfam": "IPv4", 00:18:23.393 "traddr": "10.0.0.1", 00:18:23.393 "trsvcid": "51288" 00:18:23.393 }, 00:18:23.393 "auth": { 00:18:23.393 "state": "completed", 00:18:23.393 "digest": "sha384", 00:18:23.393 "dhgroup": "ffdhe6144" 00:18:23.393 } 00:18:23.393 } 00:18:23.393 ]' 00:18:23.393 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.393 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.715 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:23.716 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.322 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.322 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.580 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.580 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.580 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.580 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.838 00:18:24.838 12:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.838 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.838 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.098 { 00:18:25.098 "cntlid": 83, 00:18:25.098 "qid": 0, 00:18:25.098 "state": "enabled", 00:18:25.098 "thread": "nvmf_tgt_poll_group_000", 00:18:25.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.098 "listen_address": { 00:18:25.098 "trtype": "TCP", 00:18:25.098 "adrfam": "IPv4", 00:18:25.098 "traddr": "10.0.0.2", 00:18:25.098 "trsvcid": "4420" 00:18:25.098 }, 00:18:25.098 "peer_address": { 00:18:25.098 "trtype": "TCP", 00:18:25.098 "adrfam": "IPv4", 00:18:25.098 "traddr": "10.0.0.1", 00:18:25.098 "trsvcid": "57160" 00:18:25.098 }, 00:18:25.098 "auth": { 00:18:25.098 "state": "completed", 00:18:25.098 "digest": "sha384", 00:18:25.098 "dhgroup": "ffdhe6144" 00:18:25.098 } 00:18:25.098 } 00:18:25.098 ]' 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.098 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.358 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:25.358 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.928 12:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.928 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.188 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.189 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.189 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.189 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.189 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.189 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.447 00:18:26.447 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.447 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.447 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.706 { 00:18:26.706 "cntlid": 85, 00:18:26.706 "qid": 0, 00:18:26.706 "state": "enabled", 00:18:26.706 "thread": "nvmf_tgt_poll_group_000", 00:18:26.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.706 "listen_address": { 00:18:26.706 "trtype": "TCP", 00:18:26.706 "adrfam": "IPv4", 00:18:26.706 "traddr": "10.0.0.2", 00:18:26.706 "trsvcid": "4420" 00:18:26.706 }, 00:18:26.706 "peer_address": { 00:18:26.706 "trtype": "TCP", 00:18:26.706 "adrfam": "IPv4", 00:18:26.706 "traddr": "10.0.0.1", 00:18:26.706 "trsvcid": "57176" 00:18:26.706 }, 00:18:26.706 "auth": { 00:18:26.706 "state": "completed", 00:18:26.706 "digest": "sha384", 00:18:26.706 "dhgroup": "ffdhe6144" 00:18:26.706 } 00:18:26.706 } 00:18:26.706 ]' 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.706 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.964 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:26.964 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.534 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.791 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.048 00:18:28.048 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.048 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.048 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.306 { 00:18:28.306 "cntlid": 87, 00:18:28.306 "qid": 0, 00:18:28.306 "state": "enabled", 00:18:28.306 "thread": "nvmf_tgt_poll_group_000", 00:18:28.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.306 "listen_address": { 00:18:28.306 "trtype": 
"TCP", 00:18:28.306 "adrfam": "IPv4", 00:18:28.306 "traddr": "10.0.0.2", 00:18:28.306 "trsvcid": "4420" 00:18:28.306 }, 00:18:28.306 "peer_address": { 00:18:28.306 "trtype": "TCP", 00:18:28.306 "adrfam": "IPv4", 00:18:28.306 "traddr": "10.0.0.1", 00:18:28.306 "trsvcid": "57204" 00:18:28.306 }, 00:18:28.306 "auth": { 00:18:28.306 "state": "completed", 00:18:28.306 "digest": "sha384", 00:18:28.306 "dhgroup": "ffdhe6144" 00:18:28.306 } 00:18:28.306 } 00:18:28.306 ]' 00:18:28.306 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.306 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.565 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:28.565 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.131 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.390 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.390 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.959 00:18:29.959 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.959 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.959 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.218 { 00:18:30.218 "cntlid": 89, 00:18:30.218 "qid": 0, 00:18:30.218 "state": "enabled", 00:18:30.218 "thread": "nvmf_tgt_poll_group_000", 00:18:30.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.218 "listen_address": { 00:18:30.218 "trtype": "TCP", 00:18:30.218 "adrfam": "IPv4", 00:18:30.218 "traddr": "10.0.0.2", 00:18:30.218 "trsvcid": "4420" 00:18:30.218 }, 00:18:30.218 "peer_address": { 00:18:30.218 "trtype": "TCP", 00:18:30.218 "adrfam": "IPv4", 00:18:30.218 "traddr": "10.0.0.1", 00:18:30.218 "trsvcid": "57228" 00:18:30.218 }, 00:18:30.218 "auth": { 00:18:30.218 "state": "completed", 00:18:30.218 "digest": "sha384", 00:18:30.218 "dhgroup": "ffdhe8192" 00:18:30.218 } 00:18:30.218 } 00:18:30.218 ]' 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.218 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.218 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.476 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:30.476 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
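The `jq` assertions above (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) can be mirrored as a small standalone validator. This is an illustrative sketch, not part of the test suite; the field names follow the `nvmf_subsystem_get_qpairs` output printed in this log, and the helper name is hypothetical.

```python
import json

def check_auth(qpairs_json, digest, dhgroup):
    # Mirror the checks from target/auth.sh lines 75-77: the first qpair's
    # auth block must report the expected digest, DH group, and a
    # completed authentication state.
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Trimmed sample record shaped like the qpairs output in this log:
sample = ('[{"cntlid": 89, "qid": 0, "state": "enabled", '
          '"auth": {"state": "completed", "digest": "sha384", '
          '"dhgroup": "ffdhe8192"}}]')
print(check_auth(sample, "sha384", "ffdhe8192"))  # -> True
```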
00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.043 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.302 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.871 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.871 { 00:18:31.871 "cntlid": 91, 00:18:31.871 "qid": 0, 00:18:31.871 "state": "enabled", 00:18:31.871 "thread": "nvmf_tgt_poll_group_000", 00:18:31.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.871 "listen_address": { 00:18:31.871 "trtype": "TCP", 00:18:31.871 "adrfam": "IPv4", 00:18:31.871 "traddr": "10.0.0.2", 00:18:31.871 "trsvcid": "4420" 00:18:31.871 }, 00:18:31.871 "peer_address": { 00:18:31.871 "trtype": "TCP", 00:18:31.871 "adrfam": "IPv4", 00:18:31.871 "traddr": "10.0.0.1", 00:18:31.871 "trsvcid": "57270" 00:18:31.871 }, 00:18:31.871 "auth": { 00:18:31.871 "state": "completed", 00:18:31.871 "digest": "sha384", 00:18:31.871 "dhgroup": "ffdhe8192" 00:18:31.871 } 00:18:31.871 } 00:18:31.871 ]' 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.871 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:32.130 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
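The `DHHC-1:xx:...:` strings passed to `nvme connect` above are the textual secret representation used by NVMe in-band authentication. As a sketch of that format (assuming the spec layout: `DHHC-1:<hh>:<base64>:`, where `<hh>` marks how the secret was hashed, 00 = unhashed, 01/02/03 = SHA-256/384/512, and the base64 payload is the key followed by a 4-byte little-endian CRC-32 of the key), a parser could look like this; the function name is illustrative:

```python
import base64
import zlib

def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into (hash_id, key, crc), verifying the CRC."""
    prefix, hash_id, payload, _tail = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(payload)
    key, crc = raw[:-4], raw[-4:]
    # The trailing 4 bytes are a little-endian CRC-32 over the key bytes.
    if zlib.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("CRC mismatch")
    return hash_id, key, crc

# Round-trip a synthetic 32-byte key (illustrative, not taken from the log):
key = bytes(range(32))
secret = "DHHC-1:00:" + base64.b64encode(
    key + zlib.crc32(key).to_bytes(4, "little")).decode() + ":"
print(parse_dhchap_secret(secret)[0])  # -> 00
```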
00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.699 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.958 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.525 00:18:33.525 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.525 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.525 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.783 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.784 { 00:18:33.784 "cntlid": 93, 00:18:33.784 "qid": 0, 00:18:33.784 "state": "enabled", 00:18:33.784 "thread": "nvmf_tgt_poll_group_000", 00:18:33.784 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.784 "listen_address": { 00:18:33.784 "trtype": "TCP", 00:18:33.784 "adrfam": "IPv4", 00:18:33.784 "traddr": "10.0.0.2", 00:18:33.784 "trsvcid": "4420" 00:18:33.784 }, 00:18:33.784 "peer_address": { 00:18:33.784 "trtype": "TCP", 00:18:33.784 "adrfam": "IPv4", 00:18:33.784 "traddr": "10.0.0.1", 00:18:33.784 "trsvcid": "57288" 00:18:33.784 }, 00:18:33.784 "auth": { 00:18:33.784 "state": "completed", 00:18:33.784 "digest": "sha384", 00:18:33.784 "dhgroup": "ffdhe8192" 00:18:33.784 } 00:18:33.784 } 00:18:33.784 ]' 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.784 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.043 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:34.043 12:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.611 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:34.871 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.872 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.440 00:18:35.441 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:35.441 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.441 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.441 { 00:18:35.441 "cntlid": 95, 00:18:35.441 "qid": 0, 00:18:35.441 "state": "enabled", 00:18:35.441 "thread": "nvmf_tgt_poll_group_000", 00:18:35.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:35.441 "listen_address": { 00:18:35.441 "trtype": "TCP", 00:18:35.441 "adrfam": "IPv4", 00:18:35.441 "traddr": "10.0.0.2", 00:18:35.441 "trsvcid": "4420" 00:18:35.441 }, 00:18:35.441 "peer_address": { 00:18:35.441 "trtype": "TCP", 00:18:35.441 "adrfam": "IPv4", 00:18:35.441 "traddr": "10.0.0.1", 00:18:35.441 "trsvcid": "58644" 00:18:35.441 }, 00:18:35.441 "auth": { 00:18:35.441 "state": "completed", 00:18:35.441 "digest": "sha384", 00:18:35.441 "dhgroup": "ffdhe8192" 00:18:35.441 } 00:18:35.441 } 00:18:35.441 ]' 00:18:35.441 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.700 12:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.700 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.960 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:35.960 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.528 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.529 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.529 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
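The nested `for digest` / `for dhgroup` / `for keyid` loops above (target/auth.sh lines 118-123) sweep every combination, re-applying each digest/dhgroup pair via `bdev_nvme_set_options` and then exercising each key id once; the log has just advanced from sha384/ffdhe8192 to sha512/null. The sweep shape can be sketched as follows, where the exact digest and dhgroup lists are assumptions inferred from the combinations visible in this log:

```python
from itertools import product

# Assumed parameter lists for the auth.sh sweep (sha384/ffdhe8192 and
# sha512/null both appear in this log excerpt):
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096",
            "ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3]

# One connect/verify/disconnect cycle per (digest, dhgroup, keyid) tuple.
cases = list(product(digests, dhgroups, keyids))
print(len(cases))  # -> 72
```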
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.788 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.788 00:18:37.047 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.047 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.047 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.047 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.047 12:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.048 { 00:18:37.048 "cntlid": 97, 00:18:37.048 "qid": 0, 00:18:37.048 "state": "enabled", 00:18:37.048 "thread": "nvmf_tgt_poll_group_000", 00:18:37.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.048 "listen_address": { 00:18:37.048 "trtype": "TCP", 00:18:37.048 "adrfam": "IPv4", 00:18:37.048 "traddr": "10.0.0.2", 00:18:37.048 "trsvcid": "4420" 00:18:37.048 }, 00:18:37.048 "peer_address": { 00:18:37.048 "trtype": "TCP", 00:18:37.048 "adrfam": "IPv4", 00:18:37.048 "traddr": "10.0.0.1", 00:18:37.048 "trsvcid": "58666" 00:18:37.048 }, 00:18:37.048 "auth": { 00:18:37.048 "state": "completed", 00:18:37.048 "digest": "sha512", 00:18:37.048 "dhgroup": "null" 00:18:37.048 } 00:18:37.048 } 00:18:37.048 ]' 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.048 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.307 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:37.307 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.307 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.307 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.307 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.567 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:37.567 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.135 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.393 00:18:38.393 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.393 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.393 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.652 { 00:18:38.652 "cntlid": 99, 
00:18:38.652 "qid": 0, 00:18:38.652 "state": "enabled", 00:18:38.652 "thread": "nvmf_tgt_poll_group_000", 00:18:38.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:38.652 "listen_address": { 00:18:38.652 "trtype": "TCP", 00:18:38.652 "adrfam": "IPv4", 00:18:38.652 "traddr": "10.0.0.2", 00:18:38.652 "trsvcid": "4420" 00:18:38.652 }, 00:18:38.652 "peer_address": { 00:18:38.652 "trtype": "TCP", 00:18:38.652 "adrfam": "IPv4", 00:18:38.652 "traddr": "10.0.0.1", 00:18:38.652 "trsvcid": "58694" 00:18:38.652 }, 00:18:38.652 "auth": { 00:18:38.652 "state": "completed", 00:18:38.652 "digest": "sha512", 00:18:38.652 "dhgroup": "null" 00:18:38.652 } 00:18:38.652 } 00:18:38.652 ]' 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.652 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret 
DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:38.911 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:39.478 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.478 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:39.478 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.478 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.737 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.997 00:18:39.997 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.997 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.997 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.255 { 00:18:40.255 "cntlid": 101, 00:18:40.255 "qid": 0, 00:18:40.255 "state": "enabled", 00:18:40.255 "thread": "nvmf_tgt_poll_group_000", 00:18:40.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:40.255 "listen_address": { 00:18:40.255 "trtype": "TCP", 00:18:40.255 "adrfam": "IPv4", 00:18:40.255 "traddr": "10.0.0.2", 00:18:40.255 "trsvcid": "4420" 00:18:40.255 }, 00:18:40.255 "peer_address": { 00:18:40.255 "trtype": "TCP", 00:18:40.255 "adrfam": "IPv4", 00:18:40.255 "traddr": "10.0.0.1", 00:18:40.255 "trsvcid": "58736" 00:18:40.255 }, 00:18:40.255 "auth": { 00:18:40.255 "state": "completed", 00:18:40.255 "digest": "sha512", 00:18:40.255 "dhgroup": "null" 00:18:40.255 } 00:18:40.255 } 
00:18:40.255 ]' 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:40.255 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.255 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.514 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.514 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.514 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:40.515 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.082 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.082 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.341 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.341 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.341 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.341 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.341 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.600 00:18:41.600 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.601 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.601 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.862 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.862 { 00:18:41.862 "cntlid": 103, 00:18:41.862 "qid": 0, 00:18:41.862 "state": "enabled", 00:18:41.862 "thread": "nvmf_tgt_poll_group_000", 00:18:41.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:41.862 "listen_address": { 00:18:41.862 "trtype": "TCP", 00:18:41.862 "adrfam": "IPv4", 00:18:41.862 "traddr": "10.0.0.2", 00:18:41.863 "trsvcid": "4420" 00:18:41.863 }, 00:18:41.863 "peer_address": { 00:18:41.863 "trtype": "TCP", 00:18:41.863 "adrfam": "IPv4", 00:18:41.863 "traddr": "10.0.0.1", 00:18:41.863 "trsvcid": "58778" 00:18:41.863 }, 00:18:41.863 "auth": { 00:18:41.863 "state": "completed", 00:18:41.863 "digest": "sha512", 00:18:41.863 "dhgroup": "null" 00:18:41.863 } 00:18:41.863 } 00:18:41.863 ]' 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.863 12:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.863 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.746 12:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.746 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.005 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.006 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.006 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.006 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.006 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.006 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.265 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.265 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.265 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.265 { 00:18:43.265 "cntlid": 105, 00:18:43.265 "qid": 0, 00:18:43.265 "state": "enabled", 00:18:43.265 "thread": "nvmf_tgt_poll_group_000", 00:18:43.265 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.265 "listen_address": { 00:18:43.265 "trtype": "TCP", 00:18:43.265 "adrfam": "IPv4", 00:18:43.265 "traddr": "10.0.0.2", 00:18:43.265 "trsvcid": "4420" 00:18:43.265 }, 00:18:43.265 "peer_address": { 00:18:43.265 "trtype": "TCP", 00:18:43.265 "adrfam": "IPv4", 00:18:43.265 "traddr": "10.0.0.1", 00:18:43.265 "trsvcid": "58792" 00:18:43.265 }, 00:18:43.265 "auth": { 00:18:43.265 "state": "completed", 00:18:43.265 "digest": "sha512", 00:18:43.265 "dhgroup": "ffdhe2048" 00:18:43.265 } 00:18:43.265 } 00:18:43.265 ]' 00:18:43.265 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.524 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.524 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.525 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.525 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.525 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.525 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.525 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.783 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret 
DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:43.783 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.350 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.609 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.609 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.609 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.868 { 00:18:44.868 "cntlid": 107, 00:18:44.868 "qid": 0, 00:18:44.868 "state": "enabled", 00:18:44.868 "thread": "nvmf_tgt_poll_group_000", 00:18:44.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:44.868 "listen_address": { 00:18:44.868 "trtype": "TCP", 00:18:44.868 "adrfam": "IPv4", 00:18:44.868 "traddr": "10.0.0.2", 00:18:44.868 "trsvcid": "4420" 00:18:44.868 }, 00:18:44.868 "peer_address": { 00:18:44.868 "trtype": "TCP", 00:18:44.868 "adrfam": "IPv4", 00:18:44.868 "traddr": "10.0.0.1", 00:18:44.868 "trsvcid": "49562" 00:18:44.868 }, 00:18:44.868 "auth": { 00:18:44.868 "state": 
"completed", 00:18:44.868 "digest": "sha512", 00:18:44.868 "dhgroup": "ffdhe2048" 00:18:44.868 } 00:18:44.868 } 00:18:44.868 ]' 00:18:44.868 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.126 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.385 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:45.385 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:45.952 12:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.952 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.211 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.211 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.211 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.211 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.211 00:18:46.470 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.470 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.470 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.470 
12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.470 { 00:18:46.470 "cntlid": 109, 00:18:46.470 "qid": 0, 00:18:46.470 "state": "enabled", 00:18:46.470 "thread": "nvmf_tgt_poll_group_000", 00:18:46.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:46.470 "listen_address": { 00:18:46.470 "trtype": "TCP", 00:18:46.470 "adrfam": "IPv4", 00:18:46.470 "traddr": "10.0.0.2", 00:18:46.470 "trsvcid": "4420" 00:18:46.470 }, 00:18:46.470 "peer_address": { 00:18:46.470 "trtype": "TCP", 00:18:46.470 "adrfam": "IPv4", 00:18:46.470 "traddr": "10.0.0.1", 00:18:46.470 "trsvcid": "49588" 00:18:46.470 }, 00:18:46.470 "auth": { 00:18:46.470 "state": "completed", 00:18:46.470 "digest": "sha512", 00:18:46.470 "dhgroup": "ffdhe2048" 00:18:46.470 } 00:18:46.470 } 00:18:46.470 ]' 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.470 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.728 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.728 12:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.728 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.728 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.728 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.987 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:46.987 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.555 
12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.555 12:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.555 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.814 00:18:47.814 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.814 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.814 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.073 { 00:18:48.073 "cntlid": 111, 
00:18:48.073 "qid": 0, 00:18:48.073 "state": "enabled", 00:18:48.073 "thread": "nvmf_tgt_poll_group_000", 00:18:48.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:48.073 "listen_address": { 00:18:48.073 "trtype": "TCP", 00:18:48.073 "adrfam": "IPv4", 00:18:48.073 "traddr": "10.0.0.2", 00:18:48.073 "trsvcid": "4420" 00:18:48.073 }, 00:18:48.073 "peer_address": { 00:18:48.073 "trtype": "TCP", 00:18:48.073 "adrfam": "IPv4", 00:18:48.073 "traddr": "10.0.0.1", 00:18:48.073 "trsvcid": "49620" 00:18:48.073 }, 00:18:48.073 "auth": { 00:18:48.073 "state": "completed", 00:18:48.073 "digest": "sha512", 00:18:48.073 "dhgroup": "ffdhe2048" 00:18:48.073 } 00:18:48.073 } 00:18:48.073 ]' 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.073 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.332 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.332 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.332 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.332 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.332 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.332 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:48.332 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:48.899 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.158 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.158 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.159 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.159 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.159 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.159 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.418 00:18:49.418 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.418 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.418 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.676 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.676 { 00:18:49.676 "cntlid": 113, 00:18:49.676 "qid": 0, 00:18:49.676 "state": "enabled", 00:18:49.676 "thread": "nvmf_tgt_poll_group_000", 00:18:49.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:49.677 "listen_address": { 00:18:49.677 "trtype": "TCP", 00:18:49.677 "adrfam": "IPv4", 00:18:49.677 "traddr": "10.0.0.2", 00:18:49.677 "trsvcid": "4420" 00:18:49.677 }, 00:18:49.677 "peer_address": { 00:18:49.677 "trtype": "TCP", 00:18:49.677 "adrfam": "IPv4", 00:18:49.677 "traddr": "10.0.0.1", 00:18:49.677 "trsvcid": "49646" 00:18:49.677 }, 00:18:49.677 "auth": { 00:18:49.677 "state": 
"completed", 00:18:49.677 "digest": "sha512", 00:18:49.677 "dhgroup": "ffdhe3072" 00:18:49.677 } 00:18:49.677 } 00:18:49.677 ]' 00:18:49.677 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.677 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.677 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:49.935 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret 
DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.503 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.762 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.021 00:18:51.021 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.021 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.021 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.279 { 00:18:51.279 "cntlid": 115, 00:18:51.279 "qid": 0, 00:18:51.279 "state": "enabled", 00:18:51.279 "thread": "nvmf_tgt_poll_group_000", 00:18:51.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:51.279 "listen_address": { 00:18:51.279 "trtype": "TCP", 00:18:51.279 "adrfam": "IPv4", 00:18:51.279 "traddr": "10.0.0.2", 00:18:51.279 "trsvcid": "4420" 00:18:51.279 }, 00:18:51.279 "peer_address": { 00:18:51.279 "trtype": "TCP", 00:18:51.279 "adrfam": "IPv4", 00:18:51.279 "traddr": "10.0.0.1", 00:18:51.279 "trsvcid": "49670" 00:18:51.279 }, 00:18:51.279 "auth": { 00:18:51.279 "state": "completed", 00:18:51.279 "digest": "sha512", 00:18:51.279 "dhgroup": "ffdhe3072" 00:18:51.279 } 00:18:51.279 } 00:18:51.279 ]' 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.279 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.279 12:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:51.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.537 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:51.537 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.103 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.362 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.621 00:18:52.621 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.621 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.621 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.880 { 00:18:52.880 "cntlid": 117, 00:18:52.880 "qid": 0, 00:18:52.880 "state": "enabled", 00:18:52.880 "thread": "nvmf_tgt_poll_group_000", 00:18:52.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:52.880 "listen_address": { 00:18:52.880 "trtype": "TCP", 00:18:52.880 "adrfam": "IPv4", 00:18:52.880 "traddr": "10.0.0.2", 00:18:52.880 "trsvcid": "4420" 00:18:52.880 }, 00:18:52.880 "peer_address": { 00:18:52.880 "trtype": "TCP", 00:18:52.880 "adrfam": "IPv4", 00:18:52.880 "traddr": "10.0.0.1", 00:18:52.880 "trsvcid": "49696" 00:18:52.880 }, 00:18:52.880 "auth": { 00:18:52.880 "state": "completed", 00:18:52.880 "digest": "sha512", 00:18:52.880 "dhgroup": "ffdhe3072" 00:18:52.880 } 00:18:52.880 } 00:18:52.880 ]' 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.880 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.139 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.139 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.139 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.139 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:53.139 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.706 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.965 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.224 00:18:54.224 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.224 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.224 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.482 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.482 { 00:18:54.482 "cntlid": 119, 00:18:54.482 "qid": 0, 00:18:54.482 "state": "enabled", 00:18:54.482 "thread": "nvmf_tgt_poll_group_000", 00:18:54.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:54.482 "listen_address": { 00:18:54.482 "trtype": "TCP", 00:18:54.482 "adrfam": "IPv4", 00:18:54.482 "traddr": "10.0.0.2", 00:18:54.482 "trsvcid": "4420" 00:18:54.482 }, 00:18:54.482 "peer_address": { 00:18:54.482 "trtype": "TCP", 00:18:54.482 "adrfam": "IPv4", 00:18:54.482 "traddr": "10.0.0.1", 
00:18:54.482 "trsvcid": "45994" 00:18:54.482 }, 00:18:54.482 "auth": { 00:18:54.483 "state": "completed", 00:18:54.483 "digest": "sha512", 00:18:54.483 "dhgroup": "ffdhe3072" 00:18:54.483 } 00:18:54.483 } 00:18:54.483 ]' 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.483 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.741 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:54.741 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:18:55.309 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:55.309 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:55.569 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.569 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.828 00:18:55.828 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.828 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.828 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.087 { 00:18:56.087 "cntlid": 121, 00:18:56.087 "qid": 0, 00:18:56.087 "state": "enabled", 00:18:56.087 "thread": "nvmf_tgt_poll_group_000", 00:18:56.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:56.087 "listen_address": { 00:18:56.087 "trtype": "TCP", 00:18:56.087 "adrfam": "IPv4", 00:18:56.087 "traddr": "10.0.0.2", 00:18:56.087 "trsvcid": "4420" 00:18:56.087 }, 00:18:56.087 "peer_address": { 00:18:56.087 "trtype": "TCP", 00:18:56.087 "adrfam": "IPv4", 00:18:56.087 "traddr": "10.0.0.1", 00:18:56.087 "trsvcid": "46026" 00:18:56.087 }, 00:18:56.087 "auth": { 00:18:56.087 "state": "completed", 00:18:56.087 "digest": "sha512", 00:18:56.087 "dhgroup": "ffdhe4096" 00:18:56.087 } 00:18:56.087 } 00:18:56.087 ]' 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.087 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.087 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.346 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:56.347 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:18:56.915 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.915 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.915 12:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.915 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.915 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.915 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.916 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.916 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.176 12:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.176 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.435 00:18:57.435 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.435 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.435 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.694 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.694 { 00:18:57.694 "cntlid": 123, 00:18:57.694 "qid": 0, 00:18:57.694 "state": "enabled", 00:18:57.694 "thread": "nvmf_tgt_poll_group_000", 00:18:57.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:57.694 "listen_address": { 00:18:57.694 "trtype": "TCP", 00:18:57.694 "adrfam": "IPv4", 00:18:57.694 "traddr": "10.0.0.2", 00:18:57.694 "trsvcid": "4420" 00:18:57.694 }, 00:18:57.694 "peer_address": { 00:18:57.694 "trtype": "TCP", 00:18:57.694 "adrfam": "IPv4", 00:18:57.694 "traddr": "10.0.0.1", 00:18:57.694 "trsvcid": "46058" 00:18:57.694 }, 00:18:57.694 "auth": { 00:18:57.694 "state": "completed", 00:18:57.694 "digest": "sha512", 00:18:57.694 "dhgroup": "ffdhe4096" 00:18:57.694 } 00:18:57.694 } 00:18:57.695 ]' 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.695 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.954 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:57.954 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.522 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.522 12:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.781 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.040 00:18:59.040 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.040 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.040 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.299 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.299 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.299 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.300 { 00:18:59.300 "cntlid": 125, 00:18:59.300 "qid": 0, 00:18:59.300 "state": "enabled", 00:18:59.300 "thread": "nvmf_tgt_poll_group_000", 00:18:59.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:59.300 "listen_address": { 00:18:59.300 "trtype": "TCP", 00:18:59.300 "adrfam": "IPv4", 00:18:59.300 "traddr": "10.0.0.2", 00:18:59.300 
"trsvcid": "4420" 00:18:59.300 }, 00:18:59.300 "peer_address": { 00:18:59.300 "trtype": "TCP", 00:18:59.300 "adrfam": "IPv4", 00:18:59.300 "traddr": "10.0.0.1", 00:18:59.300 "trsvcid": "46088" 00:18:59.300 }, 00:18:59.300 "auth": { 00:18:59.300 "state": "completed", 00:18:59.300 "digest": "sha512", 00:18:59.300 "dhgroup": "ffdhe4096" 00:18:59.300 } 00:18:59.300 } 00:18:59.300 ]' 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.300 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.300 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.300 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.300 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.300 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.300 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.559 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:18:59.559 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.127 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.386 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.645 00:19:00.645 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.645 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:00.645 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.907 { 00:19:00.907 "cntlid": 127, 00:19:00.907 "qid": 0, 00:19:00.907 "state": "enabled", 00:19:00.907 "thread": "nvmf_tgt_poll_group_000", 00:19:00.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:00.907 "listen_address": { 00:19:00.907 "trtype": "TCP", 00:19:00.907 "adrfam": "IPv4", 00:19:00.907 "traddr": "10.0.0.2", 00:19:00.907 "trsvcid": "4420" 00:19:00.907 }, 00:19:00.907 "peer_address": { 00:19:00.907 "trtype": "TCP", 00:19:00.907 "adrfam": "IPv4", 00:19:00.907 "traddr": "10.0.0.1", 00:19:00.907 "trsvcid": "46108" 00:19:00.907 }, 00:19:00.907 "auth": { 00:19:00.907 "state": "completed", 00:19:00.907 "digest": "sha512", 00:19:00.907 "dhgroup": "ffdhe4096" 00:19:00.907 } 00:19:00.907 } 00:19:00.907 ]' 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.907 
12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.907 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.207 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.207 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.207 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.207 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:01.207 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.833 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.092 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.351 00:19:02.351 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.351 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.351 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.609 12:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.609 { 00:19:02.609 "cntlid": 129, 00:19:02.609 "qid": 0, 00:19:02.609 "state": "enabled", 00:19:02.609 "thread": "nvmf_tgt_poll_group_000", 00:19:02.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:02.609 "listen_address": { 00:19:02.609 "trtype": "TCP", 00:19:02.609 "adrfam": "IPv4", 00:19:02.609 "traddr": "10.0.0.2", 00:19:02.609 "trsvcid": "4420" 00:19:02.609 }, 00:19:02.609 "peer_address": { 00:19:02.609 "trtype": "TCP", 00:19:02.609 "adrfam": "IPv4", 00:19:02.609 "traddr": "10.0.0.1", 00:19:02.609 "trsvcid": "46126" 00:19:02.609 }, 00:19:02.609 "auth": { 00:19:02.609 "state": "completed", 00:19:02.609 "digest": "sha512", 00:19:02.609 "dhgroup": "ffdhe6144" 00:19:02.609 } 00:19:02.609 } 00:19:02.609 ]' 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.609 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.868 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:02.868 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.435 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.435 12:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.694 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.952 00:19:03.952 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.952 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.952 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.210 { 00:19:04.210 "cntlid": 131, 00:19:04.210 "qid": 0, 00:19:04.210 "state": "enabled", 00:19:04.210 "thread": "nvmf_tgt_poll_group_000", 00:19:04.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:04.210 "listen_address": { 00:19:04.210 "trtype": "TCP", 00:19:04.210 "adrfam": "IPv4", 00:19:04.210 "traddr": "10.0.0.2", 00:19:04.210 
"trsvcid": "4420" 00:19:04.210 }, 00:19:04.210 "peer_address": { 00:19:04.210 "trtype": "TCP", 00:19:04.210 "adrfam": "IPv4", 00:19:04.210 "traddr": "10.0.0.1", 00:19:04.210 "trsvcid": "46152" 00:19:04.210 }, 00:19:04.210 "auth": { 00:19:04.210 "state": "completed", 00:19:04.210 "digest": "sha512", 00:19:04.210 "dhgroup": "ffdhe6144" 00:19:04.210 } 00:19:04.210 } 00:19:04.210 ]' 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.210 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.469 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.469 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.469 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.469 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.469 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.469 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:19:04.728 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.295 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.295 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.863 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.863 { 00:19:05.863 "cntlid": 133, 00:19:05.863 "qid": 0, 00:19:05.863 "state": "enabled", 00:19:05.863 "thread": "nvmf_tgt_poll_group_000", 00:19:05.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:05.863 "listen_address": { 00:19:05.863 "trtype": "TCP", 00:19:05.863 "adrfam": "IPv4", 00:19:05.863 "traddr": "10.0.0.2", 00:19:05.863 "trsvcid": "4420" 00:19:05.863 }, 00:19:05.863 "peer_address": { 00:19:05.863 "trtype": "TCP", 00:19:05.863 "adrfam": "IPv4", 00:19:05.863 "traddr": "10.0.0.1", 00:19:05.863 "trsvcid": "54118" 00:19:05.863 }, 00:19:05.863 "auth": { 00:19:05.863 "state": "completed", 00:19:05.863 "digest": "sha512", 00:19:05.863 "dhgroup": "ffdhe6144" 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ]' 00:19:05.863 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.122 12:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.122 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.381 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:19:06.381 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.950 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.518 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.518 { 00:19:07.518 "cntlid": 135, 00:19:07.518 "qid": 0, 00:19:07.518 "state": "enabled", 00:19:07.518 "thread": "nvmf_tgt_poll_group_000", 00:19:07.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:07.518 "listen_address": { 00:19:07.518 "trtype": "TCP", 00:19:07.518 "adrfam": "IPv4", 00:19:07.518 "traddr": "10.0.0.2", 00:19:07.518 "trsvcid": "4420" 00:19:07.518 }, 00:19:07.518 "peer_address": { 00:19:07.518 "trtype": "TCP", 00:19:07.518 "adrfam": "IPv4", 00:19:07.518 "traddr": "10.0.0.1", 00:19:07.518 "trsvcid": "54138" 00:19:07.518 }, 00:19:07.518 "auth": { 00:19:07.518 "state": "completed", 00:19:07.518 "digest": "sha512", 00:19:07.518 "dhgroup": "ffdhe6144" 00:19:07.518 } 00:19:07.518 } 00:19:07.518 ]' 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.518 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.776 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.776 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.776 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.776 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.776 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.034 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:08.034 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:08.607 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.608 12:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.608 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.609 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.609 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.874 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.874 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.874 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.874 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.133 00:19:09.133 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.133 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.133 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.391 { 00:19:09.391 "cntlid": 137, 00:19:09.391 "qid": 0, 00:19:09.391 "state": "enabled", 00:19:09.391 "thread": "nvmf_tgt_poll_group_000", 00:19:09.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:09.391 "listen_address": { 00:19:09.391 "trtype": "TCP", 00:19:09.391 "adrfam": "IPv4", 00:19:09.391 "traddr": "10.0.0.2", 00:19:09.391 
"trsvcid": "4420" 00:19:09.391 }, 00:19:09.391 "peer_address": { 00:19:09.391 "trtype": "TCP", 00:19:09.391 "adrfam": "IPv4", 00:19:09.391 "traddr": "10.0.0.1", 00:19:09.391 "trsvcid": "54176" 00:19:09.391 }, 00:19:09.391 "auth": { 00:19:09.391 "state": "completed", 00:19:09.391 "digest": "sha512", 00:19:09.391 "dhgroup": "ffdhe8192" 00:19:09.391 } 00:19:09.391 } 00:19:09.391 ]' 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.391 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.650 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.650 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.650 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.650 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:09.650 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.217 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.476 12:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.476 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.043 00:19:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.302 { 00:19:11.302 "cntlid": 139, 00:19:11.302 "qid": 0, 00:19:11.302 "state": "enabled", 00:19:11.302 "thread": "nvmf_tgt_poll_group_000", 00:19:11.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:11.302 "listen_address": { 00:19:11.302 "trtype": "TCP", 00:19:11.302 "adrfam": "IPv4", 00:19:11.302 "traddr": "10.0.0.2", 00:19:11.302 "trsvcid": "4420" 00:19:11.302 }, 00:19:11.302 "peer_address": { 00:19:11.302 "trtype": "TCP", 00:19:11.302 "adrfam": "IPv4", 00:19:11.302 "traddr": "10.0.0.1", 00:19:11.302 "trsvcid": "54200" 00:19:11.302 }, 00:19:11.302 "auth": { 00:19:11.302 "state": "completed", 00:19:11.302 "digest": "sha512", 00:19:11.302 "dhgroup": "ffdhe8192" 00:19:11.302 } 00:19:11.302 } 00:19:11.302 ]' 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.302 12:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.302 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.302 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.302 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.302 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.561 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:19:11.561 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: --dhchap-ctrl-secret DHHC-1:02:ZWQyMGJkZGYwNzc1YjI2NDFiNzBmNWZjMmExODU4NzFkYjNkNmEzODJiMzFkZjc2ixuwiQ==: 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.388 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.956 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.956 12:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.956 { 00:19:12.956 "cntlid": 141, 00:19:12.956 "qid": 0, 00:19:12.956 "state": "enabled", 00:19:12.956 "thread": "nvmf_tgt_poll_group_000", 00:19:12.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:12.956 "listen_address": { 00:19:12.956 "trtype": "TCP", 00:19:12.956 "adrfam": "IPv4", 00:19:12.956 "traddr": "10.0.0.2", 00:19:12.956 "trsvcid": "4420" 00:19:12.956 }, 00:19:12.956 "peer_address": { 00:19:12.956 "trtype": "TCP", 00:19:12.956 "adrfam": "IPv4", 00:19:12.956 "traddr": "10.0.0.1", 00:19:12.956 "trsvcid": "54232" 00:19:12.956 }, 00:19:12.956 "auth": { 00:19:12.956 "state": "completed", 00:19:12.956 "digest": "sha512", 00:19:12.956 "dhgroup": "ffdhe8192" 00:19:12.956 } 00:19:12.956 } 00:19:12.956 ]' 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.956 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:19:13.214 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:01:ODI5NDkyMTUyNTIxNWVlZmZkYzBmYzUwZWFmNjE0NmOk3lGG: 00:19:13.781 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.041 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.607 00:19:14.607 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.607 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.607 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.866 { 00:19:14.866 "cntlid": 143, 00:19:14.866 "qid": 0, 00:19:14.866 "state": "enabled", 00:19:14.866 "thread": "nvmf_tgt_poll_group_000", 00:19:14.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:14.866 "listen_address": { 00:19:14.866 "trtype": "TCP", 00:19:14.866 "adrfam": 
"IPv4", 00:19:14.866 "traddr": "10.0.0.2", 00:19:14.866 "trsvcid": "4420" 00:19:14.866 }, 00:19:14.866 "peer_address": { 00:19:14.866 "trtype": "TCP", 00:19:14.866 "adrfam": "IPv4", 00:19:14.866 "traddr": "10.0.0.1", 00:19:14.866 "trsvcid": "60428" 00:19:14.866 }, 00:19:14.866 "auth": { 00:19:14.866 "state": "completed", 00:19:14.866 "digest": "sha512", 00:19:14.866 "dhgroup": "ffdhe8192" 00:19:14.866 } 00:19:14.866 } 00:19:14.866 ]' 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.866 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.126 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:15.126 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:15.692 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:15.951 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.951 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.518 00:19:16.518 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.518 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.518 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.777 { 00:19:16.777 "cntlid": 145, 00:19:16.777 "qid": 0, 00:19:16.777 "state": "enabled", 00:19:16.777 "thread": "nvmf_tgt_poll_group_000", 00:19:16.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:16.777 "listen_address": { 00:19:16.777 "trtype": "TCP", 00:19:16.777 "adrfam": "IPv4", 00:19:16.777 "traddr": "10.0.0.2", 00:19:16.777 "trsvcid": "4420" 00:19:16.777 }, 00:19:16.777 "peer_address": { 00:19:16.777 "trtype": "TCP", 00:19:16.777 "adrfam": "IPv4", 00:19:16.777 "traddr": "10.0.0.1", 00:19:16.777 "trsvcid": "60458" 00:19:16.777 }, 00:19:16.777 "auth": { 00:19:16.777 "state": 
"completed", 00:19:16.777 "digest": "sha512", 00:19:16.777 "dhgroup": "ffdhe8192" 00:19:16.777 } 00:19:16.777 } 00:19:16.777 ]' 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.777 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.036 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:17.036 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWRmZGI2Yjg0NDg1NzM0Y2U1NzY0YzNlNzYxMGJiODhiYzUzZjRiYjAzZjk4YmQzGzV93Q==: --dhchap-ctrl-secret 
DHHC-1:03:N2RiZThhOTJlODI2NmM5N2I1OGY4MWNlYzFiOWVjMzBkMGI0YjQ3YTg2MDM2MzJjYTk0ZGY5YzZmNmMxNjgwN5ionDU=: 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.603 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:17.604 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:17.604 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:18.171 request: 00:19:18.171 { 00:19:18.171 "name": "nvme0", 00:19:18.171 "trtype": "tcp", 00:19:18.171 "traddr": "10.0.0.2", 00:19:18.171 "adrfam": "ipv4", 00:19:18.171 "trsvcid": "4420", 00:19:18.171 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:18.171 "prchk_reftag": false, 00:19:18.171 "prchk_guard": false, 00:19:18.171 "hdgst": false, 00:19:18.171 "ddgst": false, 00:19:18.171 "dhchap_key": "key2", 00:19:18.171 "allow_unrecognized_csi": false, 00:19:18.171 "method": "bdev_nvme_attach_controller", 00:19:18.171 "req_id": 1 00:19:18.171 } 00:19:18.171 Got JSON-RPC error response 00:19:18.171 response: 00:19:18.171 { 00:19:18.171 "code": -5, 00:19:18.171 "message": 
"Input/output error" 00:19:18.171 } 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:18.171 12:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.171 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.431 request: 00:19:18.431 { 00:19:18.431 "name": "nvme0", 00:19:18.431 "trtype": "tcp", 00:19:18.431 "traddr": "10.0.0.2", 00:19:18.431 "adrfam": "ipv4", 00:19:18.431 "trsvcid": "4420", 00:19:18.431 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:18.431 "prchk_reftag": false, 00:19:18.431 "prchk_guard": false, 00:19:18.431 "hdgst": 
false, 00:19:18.431 "ddgst": false, 00:19:18.431 "dhchap_key": "key1", 00:19:18.431 "dhchap_ctrlr_key": "ckey2", 00:19:18.431 "allow_unrecognized_csi": false, 00:19:18.431 "method": "bdev_nvme_attach_controller", 00:19:18.431 "req_id": 1 00:19:18.431 } 00:19:18.431 Got JSON-RPC error response 00:19:18.431 response: 00:19:18.431 { 00:19:18.431 "code": -5, 00:19:18.431 "message": "Input/output error" 00:19:18.431 } 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.431 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.000 request: 00:19:19.000 { 00:19:19.000 "name": "nvme0", 00:19:19.000 "trtype": 
"tcp", 00:19:19.000 "traddr": "10.0.0.2", 00:19:19.000 "adrfam": "ipv4", 00:19:19.000 "trsvcid": "4420", 00:19:19.000 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:19.000 "prchk_reftag": false, 00:19:19.000 "prchk_guard": false, 00:19:19.000 "hdgst": false, 00:19:19.000 "ddgst": false, 00:19:19.000 "dhchap_key": "key1", 00:19:19.000 "dhchap_ctrlr_key": "ckey1", 00:19:19.000 "allow_unrecognized_csi": false, 00:19:19.000 "method": "bdev_nvme_attach_controller", 00:19:19.000 "req_id": 1 00:19:19.000 } 00:19:19.000 Got JSON-RPC error response 00:19:19.000 response: 00:19:19.000 { 00:19:19.000 "code": -5, 00:19:19.000 "message": "Input/output error" 00:19:19.000 } 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 162612 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 162612 ']' 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 162612 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162612 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162612' 00:19:19.000 killing process with pid 162612 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 162612 00:19:19.000 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 162612 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=184628 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 184628 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 184628 ']' 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.259 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 184628 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 184628 ']' 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.517 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 null0 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ria 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CWL ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CWL 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.3lx 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Bmg ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bmg 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a48 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.mS0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mS0 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z4w 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.777 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.714 nvme0n1 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.714 { 00:19:20.714 "cntlid": 1, 00:19:20.714 "qid": 0, 00:19:20.714 "state": "enabled", 00:19:20.714 "thread": "nvmf_tgt_poll_group_000", 00:19:20.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:20.714 "listen_address": { 00:19:20.714 "trtype": "TCP", 00:19:20.714 "adrfam": "IPv4", 00:19:20.714 "traddr": "10.0.0.2", 00:19:20.714 "trsvcid": "4420" 00:19:20.714 }, 00:19:20.714 "peer_address": { 00:19:20.714 "trtype": "TCP", 00:19:20.714 "adrfam": "IPv4", 00:19:20.714 "traddr": 
"10.0.0.1", 00:19:20.714 "trsvcid": "60486" 00:19:20.714 }, 00:19:20.714 "auth": { 00:19:20.714 "state": "completed", 00:19:20.714 "digest": "sha512", 00:19:20.714 "dhgroup": "ffdhe8192" 00:19:20.714 } 00:19:20.714 } 00:19:20.714 ]' 00:19:20.714 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.973 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.232 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:21.232 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:21.801 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:21.801 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.801 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.061 request: 00:19:22.061 { 00:19:22.061 "name": "nvme0", 00:19:22.061 "trtype": "tcp", 00:19:22.061 "traddr": "10.0.0.2", 00:19:22.061 "adrfam": "ipv4", 00:19:22.061 "trsvcid": "4420", 00:19:22.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:22.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:22.061 "prchk_reftag": false, 00:19:22.061 "prchk_guard": false, 00:19:22.061 "hdgst": false, 00:19:22.061 "ddgst": false, 00:19:22.061 "dhchap_key": "key3", 00:19:22.061 
"allow_unrecognized_csi": false, 00:19:22.061 "method": "bdev_nvme_attach_controller", 00:19:22.061 "req_id": 1 00:19:22.061 } 00:19:22.061 Got JSON-RPC error response 00:19:22.061 response: 00:19:22.061 { 00:19:22.061 "code": -5, 00:19:22.061 "message": "Input/output error" 00:19:22.061 } 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:22.061 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:22.320 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.320 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.578 request: 00:19:22.578 { 00:19:22.578 "name": "nvme0", 00:19:22.578 "trtype": "tcp", 00:19:22.578 "traddr": "10.0.0.2", 00:19:22.578 "adrfam": "ipv4", 00:19:22.578 "trsvcid": "4420", 00:19:22.578 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:22.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:22.578 "prchk_reftag": false, 00:19:22.578 "prchk_guard": false, 00:19:22.578 "hdgst": false, 00:19:22.578 "ddgst": false, 00:19:22.579 "dhchap_key": "key3", 00:19:22.579 "allow_unrecognized_csi": false, 00:19:22.579 "method": "bdev_nvme_attach_controller", 00:19:22.579 "req_id": 1 00:19:22.579 } 00:19:22.579 Got JSON-RPC error response 00:19:22.579 response: 00:19:22.579 { 00:19:22.579 "code": -5, 00:19:22.579 "message": "Input/output error" 00:19:22.579 } 00:19:22.579 
12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:22.579 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:22.838 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.097 request: 00:19:23.097 { 00:19:23.097 "name": "nvme0", 00:19:23.097 "trtype": "tcp", 00:19:23.097 "traddr": "10.0.0.2", 00:19:23.098 "adrfam": "ipv4", 00:19:23.098 "trsvcid": "4420", 00:19:23.098 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:23.098 "prchk_reftag": false, 00:19:23.098 "prchk_guard": false, 00:19:23.098 "hdgst": false, 00:19:23.098 "ddgst": false, 00:19:23.098 "dhchap_key": "key0", 00:19:23.098 "dhchap_ctrlr_key": "key1", 00:19:23.098 "allow_unrecognized_csi": false, 00:19:23.098 "method": "bdev_nvme_attach_controller", 00:19:23.098 "req_id": 1 00:19:23.098 } 00:19:23.098 Got JSON-RPC error response 00:19:23.098 response: 00:19:23.098 { 00:19:23.098 "code": -5, 00:19:23.098 "message": "Input/output error" 00:19:23.098 } 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:23.098 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:23.358 nvme0n1 00:19:23.358 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:23.358 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.358 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:23.616 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.616 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.616 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:23.875 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:24.443 nvme0n1 00:19:24.443 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:24.443 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:24.443 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.701 
12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:24.701 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.961 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.961 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:24.961 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: --dhchap-ctrl-secret DHHC-1:03:OWU5ZTRmMDNmZTIyYzM0YmQzZWIwNzgwMjBlMmU5ZDc3OWU4MDQ5ZGQ4OGIzYjliN2E4MGRhYWE0OTVmNzA5NHf4llw=: 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.529 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:25.787 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:26.046 request: 00:19:26.046 { 00:19:26.046 "name": "nvme0", 00:19:26.046 "trtype": "tcp", 00:19:26.046 "traddr": "10.0.0.2", 00:19:26.046 "adrfam": "ipv4", 00:19:26.046 "trsvcid": "4420", 00:19:26.046 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:26.046 "prchk_reftag": false, 00:19:26.046 "prchk_guard": false, 00:19:26.046 "hdgst": false, 00:19:26.046 "ddgst": false, 00:19:26.046 "dhchap_key": "key1", 00:19:26.046 "allow_unrecognized_csi": false, 00:19:26.046 "method": "bdev_nvme_attach_controller", 00:19:26.046 "req_id": 1 00:19:26.046 } 00:19:26.046 Got JSON-RPC error response 00:19:26.046 response: 00:19:26.046 { 00:19:26.046 "code": -5, 00:19:26.046 "message": "Input/output error" 00:19:26.046 } 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:26.046 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:26.983 nvme0n1 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.983 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:27.242 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:27.501 nvme0n1 00:19:27.501 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:27.501 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:27.501 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.760 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.760 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.760 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: '' 2s 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: ]] 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzdkNjYxYmFiNjQ3YjAxNGRlZjVjZDhkMDcyMzM4YzWRYJIq: 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:28.018 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:29.921 
12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: 2s 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:29.921 12:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: ]] 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzNiZDZiNTdjYjExODkyOGVkMzUxM2I1M2JjMDQ3YmE0YjM1MGIxZTg0NGEwYmI16L/adQ==: 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:29.921 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.446 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:32.704 nvme0n1 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:32.962 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.221 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:33.221 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:33.221 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:33.480 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:33.738 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:33.738 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:33.738 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:33.997 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.256 request: 00:19:34.256 { 00:19:34.256 "name": "nvme0", 00:19:34.256 "dhchap_key": "key1", 00:19:34.256 "dhchap_ctrlr_key": "key3", 00:19:34.256 "method": "bdev_nvme_set_keys", 00:19:34.256 "req_id": 1 00:19:34.256 } 00:19:34.256 Got JSON-RPC error response 00:19:34.256 response: 00:19:34.256 { 00:19:34.256 "code": -13, 00:19:34.256 "message": "Permission denied" 00:19:34.256 } 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:34.514 12:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:34.514 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:35.892 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:36.460 nvme0n1 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.460 12:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:36.460 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:37.027 request: 00:19:37.027 { 00:19:37.027 "name": "nvme0", 00:19:37.027 "dhchap_key": "key2", 00:19:37.027 "dhchap_ctrlr_key": "key0", 00:19:37.027 "method": "bdev_nvme_set_keys", 00:19:37.027 "req_id": 1 00:19:37.027 } 00:19:37.027 Got JSON-RPC error response 00:19:37.027 response: 00:19:37.027 { 00:19:37.027 "code": -13, 00:19:37.027 "message": "Permission denied" 00:19:37.027 } 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:37.027 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.286 12:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:37.286 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:38.222 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:38.222 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:38.222 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 162768 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 162768 ']' 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 162768 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162768 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 162768' 00:19:38.482 killing process with pid 162768 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 162768 00:19:38.482 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 162768 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.741 rmmod nvme_tcp 00:19:38.741 rmmod nvme_fabrics 00:19:38.741 rmmod nvme_keyring 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 184628 ']' 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 184628 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 184628 ']' 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 184628 00:19:38.741 12:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.741 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 184628 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 184628' 00:19:39.001 killing process with pid 184628 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 184628 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 184628 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.001 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ria /tmp/spdk.key-sha256.3lx /tmp/spdk.key-sha384.a48 /tmp/spdk.key-sha512.z4w /tmp/spdk.key-sha512.CWL /tmp/spdk.key-sha384.Bmg /tmp/spdk.key-sha256.mS0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:41.537 00:19:41.537 real 2m31.888s 00:19:41.537 user 5m49.699s 00:19:41.537 sys 0m24.526s 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 ************************************ 00:19:41.537 END TEST nvmf_auth_target 00:19:41.537 ************************************ 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.537 12:33:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 ************************************ 00:19:41.537 START TEST nvmf_bdevio_no_huge 00:19:41.537 ************************************ 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:41.537 * Looking for test storage... 00:19:41.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.537 12:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.537 --rc genhtml_branch_coverage=1 00:19:41.537 --rc genhtml_function_coverage=1 00:19:41.537 --rc genhtml_legend=1 00:19:41.537 --rc geninfo_all_blocks=1 00:19:41.537 --rc geninfo_unexecuted_blocks=1 00:19:41.537 00:19:41.537 ' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.537 --rc genhtml_branch_coverage=1 00:19:41.537 --rc genhtml_function_coverage=1 00:19:41.537 --rc genhtml_legend=1 00:19:41.537 --rc geninfo_all_blocks=1 00:19:41.537 --rc geninfo_unexecuted_blocks=1 00:19:41.537 00:19:41.537 ' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.537 --rc genhtml_branch_coverage=1 00:19:41.537 --rc genhtml_function_coverage=1 00:19:41.537 --rc genhtml_legend=1 00:19:41.537 --rc geninfo_all_blocks=1 00:19:41.537 --rc geninfo_unexecuted_blocks=1 00:19:41.537 00:19:41.537 ' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.537 --rc genhtml_branch_coverage=1 00:19:41.537 --rc 
genhtml_function_coverage=1 00:19:41.537 --rc genhtml_legend=1 00:19:41.537 --rc geninfo_all_blocks=1 00:19:41.537 --rc geninfo_unexecuted_blocks=1 00:19:41.537 00:19:41.537 ' 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.537 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.537 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.537 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.537 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.538 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.538 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:19:48.216 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.216 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.216 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.217 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.217 
12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.217 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:48.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:19:48.217 00:19:48.217 --- 10.0.0.2 ping statistics --- 00:19:48.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.217 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:19:48.217 00:19:48.217 --- 10.0.0.1 ping statistics --- 00:19:48.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.217 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.217 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=191521 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 191521 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 191521 ']' 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 [2024-11-20 12:33:53.062051] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:19:48.217 [2024-11-20 12:33:53.062103] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:48.217 [2024-11-20 12:33:53.146291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.217 [2024-11-20 12:33:53.192990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.217 [2024-11-20 12:33:53.193024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.217 [2024-11-20 12:33:53.193031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.217 [2024-11-20 12:33:53.193037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.217 [2024-11-20 12:33:53.193042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.217 [2024-11-20 12:33:53.194151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:48.217 [2024-11-20 12:33:53.194259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:48.217 [2024-11-20 12:33:53.194365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:48.217 [2024-11-20 12:33:53.194364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 [2024-11-20 12:33:53.943349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:48.217 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 Malloc0 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.217 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.218 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.476 [2024-11-20 12:33:53.987635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.476 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:48.476 { 00:19:48.476 "params": { 00:19:48.476 "name": "Nvme$subsystem", 00:19:48.476 "trtype": "$TEST_TRANSPORT", 00:19:48.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.476 "adrfam": "ipv4", 00:19:48.476 "trsvcid": "$NVMF_PORT", 00:19:48.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.476 "hdgst": ${hdgst:-false}, 00:19:48.476 "ddgst": ${ddgst:-false} 00:19:48.476 }, 00:19:48.476 "method": "bdev_nvme_attach_controller" 00:19:48.476 } 00:19:48.476 EOF 00:19:48.476 )") 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:48.476 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:48.476 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:48.476 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:48.476 "params": { 00:19:48.476 "name": "Nvme1", 00:19:48.476 "trtype": "tcp", 00:19:48.476 "traddr": "10.0.0.2", 00:19:48.476 "adrfam": "ipv4", 00:19:48.476 "trsvcid": "4420", 00:19:48.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.476 "hdgst": false, 00:19:48.476 "ddgst": false 00:19:48.476 }, 00:19:48.476 "method": "bdev_nvme_attach_controller" 00:19:48.476 }' 00:19:48.476 [2024-11-20 12:33:54.037771] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:48.476 [2024-11-20 12:33:54.037814] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid191771 ] 00:19:48.476 [2024-11-20 12:33:54.116459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.476 [2024-11-20 12:33:54.164561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.476 [2024-11-20 12:33:54.164668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.476 [2024-11-20 12:33:54.164669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.734 I/O targets: 00:19:48.734 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:48.734 00:19:48.734 00:19:48.734 CUnit - A unit testing framework for C - Version 2.1-3 00:19:48.734 http://cunit.sourceforge.net/ 00:19:48.734 00:19:48.734 00:19:48.734 Suite: bdevio tests on: Nvme1n1 00:19:48.734 Test: blockdev write read block ...passed 00:19:48.734 Test: blockdev write zeroes read block ...passed 00:19:48.734 Test: blockdev write zeroes read no split ...passed 00:19:48.734 Test: blockdev write zeroes 
read split ...passed 00:19:48.734 Test: blockdev write zeroes read split partial ...passed 00:19:48.734 Test: blockdev reset ...[2024-11-20 12:33:54.494586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:48.734 [2024-11-20 12:33:54.494650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc98920 (9): Bad file descriptor 00:19:48.992 [2024-11-20 12:33:54.515526] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:48.992 passed 00:19:48.992 Test: blockdev write read 8 blocks ...passed 00:19:48.992 Test: blockdev write read size > 128k ...passed 00:19:48.992 Test: blockdev write read invalid size ...passed 00:19:48.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:48.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:48.992 Test: blockdev write read max offset ...passed 00:19:48.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:48.992 Test: blockdev writev readv 8 blocks ...passed 00:19:48.992 Test: blockdev writev readv 30 x 1block ...passed 00:19:48.992 Test: blockdev writev readv block ...passed 00:19:48.992 Test: blockdev writev readv size > 128k ...passed 00:19:48.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:48.992 Test: blockdev comparev and writev ...[2024-11-20 12:33:54.727030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 
12:33:54.727082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:48.992 [2024-11-20 12:33:54.727874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:48.992 [2024-11-20 12:33:54.727882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:49.250 passed 00:19:49.250 Test: blockdev nvme passthru rw ...passed 00:19:49.250 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:33:54.809523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.250 [2024-11-20 12:33:54.809539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:49.250 [2024-11-20 12:33:54.809642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.250 [2024-11-20 12:33:54.809653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:49.250 [2024-11-20 12:33:54.809750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.250 [2024-11-20 12:33:54.809761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:49.250 [2024-11-20 12:33:54.809867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.250 [2024-11-20 12:33:54.809877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:49.250 passed 00:19:49.250 Test: blockdev nvme admin passthru ...passed 00:19:49.250 Test: blockdev copy ...passed 00:19:49.250 00:19:49.250 Run Summary: Type Total Ran Passed Failed Inactive 00:19:49.250 suites 1 1 n/a 0 0 00:19:49.250 tests 23 23 23 0 0 00:19:49.250 asserts 152 152 152 0 n/a 00:19:49.250 00:19:49.250 Elapsed time = 1.008 seconds 
00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.508 rmmod nvme_tcp 00:19:49.508 rmmod nvme_fabrics 00:19:49.508 rmmod nvme_keyring 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 191521 ']' 00:19:49.508 12:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 191521 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 191521 ']' 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 191521 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 191521 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 191521' 00:19:49.508 killing process with pid 191521 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 191521 00:19:49.508 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 191521 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:50.075 12:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.075 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.979 00:19:51.979 real 0m10.800s 00:19:51.979 user 0m13.026s 00:19:51.979 sys 0m5.381s 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:51.979 ************************************ 00:19:51.979 END TEST nvmf_bdevio_no_huge 00:19:51.979 ************************************ 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.979 
************************************ 00:19:51.979 START TEST nvmf_tls 00:19:51.979 ************************************ 00:19:51.979 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:52.240 * Looking for test storage... 00:19:52.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.240 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:52.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.241 --rc genhtml_branch_coverage=1 00:19:52.241 --rc genhtml_function_coverage=1 00:19:52.241 --rc genhtml_legend=1 00:19:52.241 --rc geninfo_all_blocks=1 00:19:52.241 --rc geninfo_unexecuted_blocks=1 00:19:52.241 00:19:52.241 ' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:52.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.241 --rc genhtml_branch_coverage=1 00:19:52.241 --rc genhtml_function_coverage=1 00:19:52.241 --rc genhtml_legend=1 00:19:52.241 --rc geninfo_all_blocks=1 00:19:52.241 --rc geninfo_unexecuted_blocks=1 00:19:52.241 00:19:52.241 ' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:52.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.241 --rc genhtml_branch_coverage=1 00:19:52.241 --rc genhtml_function_coverage=1 00:19:52.241 --rc genhtml_legend=1 00:19:52.241 --rc geninfo_all_blocks=1 00:19:52.241 --rc geninfo_unexecuted_blocks=1 00:19:52.241 00:19:52.241 ' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:52.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.241 --rc genhtml_branch_coverage=1 00:19:52.241 --rc genhtml_function_coverage=1 00:19:52.241 --rc genhtml_legend=1 00:19:52.241 --rc geninfo_all_blocks=1 00:19:52.241 --rc geninfo_unexecuted_blocks=1 00:19:52.241 00:19:52.241 ' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.241 
12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.241 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:52.242 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.813 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.814 12:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.814 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.814 12:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.814 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.814 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.814 12:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.814 
12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:19:58.814 00:19:58.814 --- 10.0.0.2 ping statistics --- 00:19:58.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.814 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:19:58.814 00:19:58.814 --- 10.0.0.1 ping statistics --- 00:19:58.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.814 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.814 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=195555 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 195555 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 195555 ']' 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.815 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.815 [2024-11-20 12:34:03.942564] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:19:58.815 [2024-11-20 12:34:03.942605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.815 [2024-11-20 12:34:04.002597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.815 [2024-11-20 12:34:04.043333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.815 [2024-11-20 12:34:04.043367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:58.815 [2024-11-20 12:34:04.043374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.815 [2024-11-20 12:34:04.043380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.815 [2024-11-20 12:34:04.043385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.815 [2024-11-20 12:34:04.043941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:58.815 true 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:58.815 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:58.815 
12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:59.074 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:59.074 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.332 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:59.332 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:59.332 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:59.333 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.333 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:59.591 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:59.591 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:59.591 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.591 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:59.850 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:59.850 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:59.850 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:59.850 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.850 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:00.109 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:00.109 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:00.109 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:00.369 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:00.369 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:00.628 12:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.luUc38Pkc4 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mQVxy1z8NQ 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.luUc38Pkc4 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mQVxy1z8NQ 00:20:00.628 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:00.887 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:01.146 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.luUc38Pkc4 00:20:01.146 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.luUc38Pkc4 00:20:01.146 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.146 [2024-11-20 12:34:06.845408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.146 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.405 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.664 [2024-11-20 12:34:07.206322] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.664 [2024-11-20 12:34:07.206525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.664 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.664 malloc0 00:20:01.664 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.922 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.luUc38Pkc4 00:20:02.181 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.440 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.luUc38Pkc4 00:20:12.415 Initializing NVMe Controllers 00:20:12.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.415 Initialization complete. Launching workers. 
00:20:12.415 ======================================================== 00:20:12.415 Latency(us) 00:20:12.415 Device Information : IOPS MiB/s Average min max 00:20:12.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16800.38 65.63 3809.50 820.14 5595.07 00:20:12.415 ======================================================== 00:20:12.415 Total : 16800.38 65.63 3809.50 820.14 5595.07 00:20:12.415 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.luUc38Pkc4 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.luUc38Pkc4 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=197900 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 197900 /var/tmp/bdevperf.sock 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 197900 ']' 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.415 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.415 [2024-11-20 12:34:18.150532] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:12.416 [2024-11-20 12:34:18.150579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197900 ] 00:20:12.675 [2024-11-20 12:34:18.224593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.675 [2024-11-20 12:34:18.265996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.675 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.675 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.675 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.luUc38Pkc4 00:20:12.934 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:13.193 [2024-11-20 12:34:18.707653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.193 TLSTESTn1 00:20:13.193 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:13.193 Running I/O for 10 seconds... 00:20:15.137 5362.00 IOPS, 20.95 MiB/s [2024-11-20T11:34:22.281Z] 5451.50 IOPS, 21.29 MiB/s [2024-11-20T11:34:23.218Z] 5474.00 IOPS, 21.38 MiB/s [2024-11-20T11:34:24.156Z] 5493.25 IOPS, 21.46 MiB/s [2024-11-20T11:34:25.092Z] 5514.60 IOPS, 21.54 MiB/s [2024-11-20T11:34:26.029Z] 5531.67 IOPS, 21.61 MiB/s [2024-11-20T11:34:26.966Z] 5538.14 IOPS, 21.63 MiB/s [2024-11-20T11:34:28.345Z] 5527.75 IOPS, 21.59 MiB/s [2024-11-20T11:34:29.329Z] 5515.67 IOPS, 21.55 MiB/s [2024-11-20T11:34:29.329Z] 5519.40 IOPS, 21.56 MiB/s 00:20:23.563 Latency(us) 00:20:23.563 [2024-11-20T11:34:29.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.563 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:23.563 Verification LBA range: start 0x0 length 0x2000 00:20:23.563 TLSTESTn1 : 10.01 5524.16 21.58 0.00 0.00 23138.40 4962.01 25340.59 00:20:23.563 [2024-11-20T11:34:29.329Z] =================================================================================================================== 00:20:23.563 [2024-11-20T11:34:29.329Z] Total : 5524.16 21.58 0.00 0.00 23138.40 4962.01 25340.59 00:20:23.563 { 00:20:23.563 "results": [ 00:20:23.563 { 00:20:23.563 "job": "TLSTESTn1", 00:20:23.563 "core_mask": "0x4", 00:20:23.563 "workload": "verify", 00:20:23.563 "status": "finished", 00:20:23.563 "verify_range": { 00:20:23.563 "start": 0, 00:20:23.563 "length": 8192 00:20:23.563 }, 00:20:23.563 "queue_depth": 128, 00:20:23.563 "io_size": 4096, 00:20:23.563 "runtime": 10.014365, 00:20:23.563 "iops": 
5524.164537641678, 00:20:23.563 "mibps": 21.578767725162805, 00:20:23.563 "io_failed": 0, 00:20:23.563 "io_timeout": 0, 00:20:23.563 "avg_latency_us": 23138.401490314965, 00:20:23.563 "min_latency_us": 4962.011428571429, 00:20:23.563 "max_latency_us": 25340.586666666666 00:20:23.563 } 00:20:23.563 ], 00:20:23.563 "core_count": 1 00:20:23.563 } 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 197900 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 197900 ']' 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 197900 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.563 12:34:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197900 00:20:23.563 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:23.563 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:23.563 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197900' 00:20:23.563 killing process with pid 197900 00:20:23.563 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 197900 00:20:23.564 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.564 00:20:23.564 Latency(us) 00:20:23.564 [2024-11-20T11:34:29.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.564 [2024-11-20T11:34:29.330Z] 
=================================================================================================================== 00:20:23.564 [2024-11-20T11:34:29.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 197900 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mQVxy1z8NQ 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mQVxy1z8NQ 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mQVxy1z8NQ 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mQVxy1z8NQ 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=199737 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 199737 /var/tmp/bdevperf.sock 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 199737 ']' 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.564 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.564 [2024-11-20 12:34:29.196250] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:23.565 [2024-11-20 12:34:29.196295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199737 ] 00:20:23.565 [2024-11-20 12:34:29.252890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.854 [2024-11-20 12:34:29.296934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.854 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.854 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.854 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mQVxy1z8NQ 00:20:23.854 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.113 [2024-11-20 12:34:29.730890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.113 [2024-11-20 12:34:29.740298] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:24.113 [2024-11-20 12:34:29.741118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d170 (107): Transport endpoint is not connected 00:20:24.113 [2024-11-20 12:34:29.742112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d170 (9): Bad file descriptor 00:20:24.113 [2024-11-20 
12:34:29.743114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:24.113 [2024-11-20 12:34:29.743125] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:24.113 [2024-11-20 12:34:29.743133] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:24.113 [2024-11-20 12:34:29.743144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:24.113 request: 00:20:24.113 { 00:20:24.113 "name": "TLSTEST", 00:20:24.113 "trtype": "tcp", 00:20:24.113 "traddr": "10.0.0.2", 00:20:24.113 "adrfam": "ipv4", 00:20:24.113 "trsvcid": "4420", 00:20:24.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.113 "prchk_reftag": false, 00:20:24.113 "prchk_guard": false, 00:20:24.113 "hdgst": false, 00:20:24.113 "ddgst": false, 00:20:24.113 "psk": "key0", 00:20:24.113 "allow_unrecognized_csi": false, 00:20:24.113 "method": "bdev_nvme_attach_controller", 00:20:24.113 "req_id": 1 00:20:24.113 } 00:20:24.113 Got JSON-RPC error response 00:20:24.113 response: 00:20:24.113 { 00:20:24.113 "code": -5, 00:20:24.113 "message": "Input/output error" 00:20:24.113 } 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 199737 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 199737 ']' 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 199737 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 199737 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 199737' 00:20:24.113 killing process with pid 199737 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 199737 00:20:24.113 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.113 00:20:24.113 Latency(us) 00:20:24.113 [2024-11-20T11:34:29.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.113 [2024-11-20T11:34:29.879Z] =================================================================================================================== 00:20:24.113 [2024-11-20T11:34:29.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.113 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 199737 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.luUc38Pkc4 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.luUc38Pkc4 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.luUc38Pkc4 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.luUc38Pkc4 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=199758 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.372 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 199758 
/var/tmp/bdevperf.sock 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 199758 ']' 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.373 12:34:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.373 [2024-11-20 12:34:30.015355] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:24.373 [2024-11-20 12:34:30.015409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199758 ] 00:20:24.373 [2024-11-20 12:34:30.095126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.373 [2024-11-20 12:34:30.135172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.631 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.631 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.631 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.luUc38Pkc4 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:24.891 [2024-11-20 12:34:30.593952] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.891 [2024-11-20 12:34:30.602450] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.891 [2024-11-20 12:34:30.602473] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.891 [2024-11-20 12:34:30.602501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:24.891 [2024-11-20 12:34:30.603307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213170 (107): Transport endpoint is not connected 00:20:24.891 [2024-11-20 12:34:30.604299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2213170 (9): Bad file descriptor 00:20:24.891 [2024-11-20 12:34:30.605301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:24.891 [2024-11-20 12:34:30.605316] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:24.891 [2024-11-20 12:34:30.605323] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:24.891 [2024-11-20 12:34:30.605334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:24.891 request: 00:20:24.891 { 00:20:24.891 "name": "TLSTEST", 00:20:24.891 "trtype": "tcp", 00:20:24.891 "traddr": "10.0.0.2", 00:20:24.891 "adrfam": "ipv4", 00:20:24.891 "trsvcid": "4420", 00:20:24.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.891 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:24.891 "prchk_reftag": false, 00:20:24.891 "prchk_guard": false, 00:20:24.891 "hdgst": false, 00:20:24.891 "ddgst": false, 00:20:24.891 "psk": "key0", 00:20:24.891 "allow_unrecognized_csi": false, 00:20:24.891 "method": "bdev_nvme_attach_controller", 00:20:24.891 "req_id": 1 00:20:24.891 } 00:20:24.891 Got JSON-RPC error response 00:20:24.891 response: 00:20:24.891 { 00:20:24.891 "code": -5, 00:20:24.891 "message": "Input/output error" 00:20:24.891 } 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 199758 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 199758 ']' 00:20:24.891 12:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 199758 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.891 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 199758 00:20:25.151 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.151 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.151 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 199758' 00:20:25.152 killing process with pid 199758 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 199758 00:20:25.152 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.152 00:20:25.152 Latency(us) 00:20:25.152 [2024-11-20T11:34:30.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.152 [2024-11-20T11:34:30.918Z] =================================================================================================================== 00:20:25.152 [2024-11-20T11:34:30.918Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 199758 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.152 12:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.luUc38Pkc4 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.luUc38Pkc4 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.luUc38Pkc4 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.luUc38Pkc4 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=199990 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 199990 /var/tmp/bdevperf.sock 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 199990 ']' 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.152 12:34:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.152 [2024-11-20 12:34:30.882989] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:25.152 [2024-11-20 12:34:30.883040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199990 ] 00:20:25.411 [2024-11-20 12:34:30.956263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.411 [2024-11-20 12:34:30.993884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.411 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.411 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.411 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.luUc38Pkc4 00:20:25.669 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.930 [2024-11-20 12:34:31.443291] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.930 [2024-11-20 12:34:31.454620] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.930 [2024-11-20 12:34:31.454643] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.930 [2024-11-20 12:34:31.454664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:25.930 [2024-11-20 12:34:31.455604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66170 (107): Transport endpoint is not connected 00:20:25.930 [2024-11-20 12:34:31.456598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66170 (9): Bad file descriptor 00:20:25.930 [2024-11-20 12:34:31.457600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:25.930 [2024-11-20 12:34:31.457611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:25.930 [2024-11-20 12:34:31.457620] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:25.930 [2024-11-20 12:34:31.457632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:25.930 request: 00:20:25.930 { 00:20:25.930 "name": "TLSTEST", 00:20:25.930 "trtype": "tcp", 00:20:25.930 "traddr": "10.0.0.2", 00:20:25.930 "adrfam": "ipv4", 00:20:25.930 "trsvcid": "4420", 00:20:25.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.930 "prchk_reftag": false, 00:20:25.930 "prchk_guard": false, 00:20:25.930 "hdgst": false, 00:20:25.930 "ddgst": false, 00:20:25.930 "psk": "key0", 00:20:25.930 "allow_unrecognized_csi": false, 00:20:25.930 "method": "bdev_nvme_attach_controller", 00:20:25.930 "req_id": 1 00:20:25.930 } 00:20:25.930 Got JSON-RPC error response 00:20:25.930 response: 00:20:25.930 { 00:20:25.930 "code": -5, 00:20:25.930 "message": "Input/output error" 00:20:25.930 } 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 199990 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 199990 ']' 00:20:25.930 12:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 199990 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 199990 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 199990' 00:20:25.930 killing process with pid 199990 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 199990 00:20:25.930 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.930 00:20:25.930 Latency(us) 00:20:25.930 [2024-11-20T11:34:31.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.930 [2024-11-20T11:34:31.696Z] =================================================================================================================== 00:20:25.930 [2024-11-20T11:34:31.696Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 199990 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.930 12:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.930 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=200152 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 200152 /var/tmp/bdevperf.sock 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 200152 ']' 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.189 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.190 [2024-11-20 12:34:31.739555] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:26.190 [2024-11-20 12:34:31.739604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200152 ] 00:20:26.190 [2024-11-20 12:34:31.812962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.190 [2024-11-20 12:34:31.851134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.190 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.190 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.190 12:34:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:26.448 [2024-11-20 12:34:32.112954] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:26.448 [2024-11-20 12:34:32.112985] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:26.448 request: 00:20:26.448 { 00:20:26.448 "name": "key0", 00:20:26.448 "path": "", 00:20:26.448 "method": "keyring_file_add_key", 00:20:26.448 "req_id": 1 00:20:26.448 } 00:20:26.448 Got JSON-RPC error response 00:20:26.448 response: 00:20:26.448 { 00:20:26.448 "code": -1, 00:20:26.448 "message": "Operation not permitted" 00:20:26.448 } 00:20:26.448 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.708 [2024-11-20 12:34:32.301534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:26.708 [2024-11-20 12:34:32.301560] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:26.708 request: 00:20:26.708 { 00:20:26.708 "name": "TLSTEST", 00:20:26.708 "trtype": "tcp", 00:20:26.708 "traddr": "10.0.0.2", 00:20:26.708 "adrfam": "ipv4", 00:20:26.708 "trsvcid": "4420", 00:20:26.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.708 "prchk_reftag": false, 00:20:26.708 "prchk_guard": false, 00:20:26.708 "hdgst": false, 00:20:26.708 "ddgst": false, 00:20:26.708 "psk": "key0", 00:20:26.708 "allow_unrecognized_csi": false, 00:20:26.708 "method": "bdev_nvme_attach_controller", 00:20:26.708 "req_id": 1 00:20:26.708 } 00:20:26.708 Got JSON-RPC error response 00:20:26.708 response: 00:20:26.708 { 00:20:26.708 "code": -126, 00:20:26.708 "message": "Required key not available" 00:20:26.708 } 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 200152 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 200152 ']' 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 200152 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200152 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.708 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200152' 00:20:26.708 killing process with pid 200152 00:20:26.708 
12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 200152 00:20:26.708 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.708 00:20:26.708 Latency(us) 00:20:26.708 [2024-11-20T11:34:32.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.708 [2024-11-20T11:34:32.474Z] =================================================================================================================== 00:20:26.708 [2024-11-20T11:34:32.474Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.709 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 200152 00:20:26.967 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:26.967 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:26.967 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.967 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 195555 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 195555 ']' 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 195555 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 195555 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 195555' 00:20:26.968 killing process with pid 195555 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 195555 00:20:26.968 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 195555 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lPlypwmqUq 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:27.227 12:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lPlypwmqUq 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=200253 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 200253 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 200253 ']' 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.227 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.227 [2024-11-20 12:34:32.824605] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:27.227 [2024-11-20 12:34:32.824655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.227 [2024-11-20 12:34:32.890800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.228 [2024-11-20 12:34:32.932715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.228 [2024-11-20 12:34:32.932750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.228 [2024-11-20 12:34:32.932761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.228 [2024-11-20 12:34:32.932767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.228 [2024-11-20 12:34:32.932772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:27.228 [2024-11-20 12:34:32.933319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lPlypwmqUq 00:20:27.487 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.487 [2024-11-20 12:34:33.244580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.746 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:27.746 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.005 [2024-11-20 12:34:33.633572] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.005 [2024-11-20 12:34:33.633763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:28.005 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:28.264 malloc0 00:20:28.264 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:28.523 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:28.523 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.782 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPlypwmqUq 00:20:28.782 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lPlypwmqUq 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=200626 00:20:28.783 12:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 200626 /var/tmp/bdevperf.sock 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 200626 ']' 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.783 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 [2024-11-20 12:34:34.474295] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:28.783 [2024-11-20 12:34:34.474349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200626 ] 00:20:29.042 [2024-11-20 12:34:34.551570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.042 [2024-11-20 12:34:34.593980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.042 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.042 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:29.042 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:29.300 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.300 [2024-11-20 12:34:35.039816] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.559 TLSTESTn1 00:20:29.559 12:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:29.559 Running I/O for 10 seconds... 
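The target bring-up recorded above (target/tls.sh lines 52-59: transport creation, subsystem, TLS-enabled listener via `-k`, malloc bdev, namespace, key registration, and host with `--psk`) can be collected into one reference list. This is a sketch assembled from this log only, not an executable reproduction: it prints the RPC sequence rather than running it, since executing it requires a live `nvmf_tgt` process, and the workspace path, key file, and NQNs are the ones this particular run used.

```shell
# RPC sequence from this log's setup_nvmf_tgt (target/tls.sh@50-59).
# Printed, not executed: a running nvmf_tgt would be required.
SPDK_RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path from this log
KEY=/tmp/tmp.lPlypwmqUq                                                    # PSK file from this run

tls_setup_cmds() {
  cat <<EOF
chmod 0600 $KEY
$SPDK_RPC nvmf_create_transport -t tcp -o
$SPDK_RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$SPDK_RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$SPDK_RPC bdev_malloc_create 32 4096 -b malloc0
$SPDK_RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$SPDK_RPC keyring_file_add_key key0 $KEY
$SPDK_RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
EOF
}

tls_setup_cmds
```

The `-k` on the listener and the `--psk key0` on both the host entry and the later `bdev_nvme_attach_controller` are what make this a TLS run; the log notes TLS support is considered experimental at both ends.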
00:20:31.874 5378.00 IOPS, 21.01 MiB/s [2024-11-20T11:34:38.577Z] 5459.50 IOPS, 21.33 MiB/s [2024-11-20T11:34:39.523Z] 5509.67 IOPS, 21.52 MiB/s [2024-11-20T11:34:40.459Z] 5496.50 IOPS, 21.47 MiB/s [2024-11-20T11:34:41.396Z] 5495.60 IOPS, 21.47 MiB/s [2024-11-20T11:34:42.333Z] 5481.00 IOPS, 21.41 MiB/s [2024-11-20T11:34:43.268Z] 5497.86 IOPS, 21.48 MiB/s [2024-11-20T11:34:44.644Z] 5440.75 IOPS, 21.25 MiB/s [2024-11-20T11:34:45.284Z] 5373.89 IOPS, 20.99 MiB/s [2024-11-20T11:34:45.284Z] 5334.60 IOPS, 20.84 MiB/s 00:20:39.518 Latency(us) 00:20:39.518 [2024-11-20T11:34:45.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.518 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.518 Verification LBA range: start 0x0 length 0x2000 00:20:39.518 TLSTESTn1 : 10.02 5338.15 20.85 0.00 0.00 23941.86 6147.90 28211.69 00:20:39.518 [2024-11-20T11:34:45.284Z] =================================================================================================================== 00:20:39.518 [2024-11-20T11:34:45.284Z] Total : 5338.15 20.85 0.00 0.00 23941.86 6147.90 28211.69 00:20:39.518 { 00:20:39.518 "results": [ 00:20:39.518 { 00:20:39.518 "job": "TLSTESTn1", 00:20:39.518 "core_mask": "0x4", 00:20:39.518 "workload": "verify", 00:20:39.518 "status": "finished", 00:20:39.518 "verify_range": { 00:20:39.518 "start": 0, 00:20:39.518 "length": 8192 00:20:39.518 }, 00:20:39.518 "queue_depth": 128, 00:20:39.518 "io_size": 4096, 00:20:39.518 "runtime": 10.017147, 00:20:39.518 "iops": 5338.1466798879965, 00:20:39.518 "mibps": 20.852135468312486, 00:20:39.518 "io_failed": 0, 00:20:39.518 "io_timeout": 0, 00:20:39.518 "avg_latency_us": 23941.863141897156, 00:20:39.518 "min_latency_us": 6147.900952380953, 00:20:39.518 "max_latency_us": 28211.687619047618 00:20:39.518 } 00:20:39.518 ], 00:20:39.518 "core_count": 1 00:20:39.518 } 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 200626 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 200626 ']' 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 200626 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200626 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200626' 00:20:39.777 killing process with pid 200626 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 200626 00:20:39.777 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.777 00:20:39.777 Latency(us) 00:20:39.777 [2024-11-20T11:34:45.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.777 [2024-11-20T11:34:45.543Z] =================================================================================================================== 00:20:39.777 [2024-11-20T11:34:45.543Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 200626 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lPlypwmqUq 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPlypwmqUq 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPlypwmqUq 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lPlypwmqUq 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lPlypwmqUq 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=202348 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 202348 /var/tmp/bdevperf.sock 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 202348 ']' 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.777 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.035 [2024-11-20 12:34:45.551509] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:20:40.036 [2024-11-20 12:34:45.551561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202348 ] 00:20:40.036 [2024-11-20 12:34:45.622193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.036 [2024-11-20 12:34:45.658904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.036 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.036 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.036 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:40.294 [2024-11-20 12:34:45.932876] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lPlypwmqUq': 0100666 00:20:40.294 [2024-11-20 12:34:45.932908] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:40.294 request: 00:20:40.294 { 00:20:40.294 "name": "key0", 00:20:40.294 "path": "/tmp/tmp.lPlypwmqUq", 00:20:40.294 "method": "keyring_file_add_key", 00:20:40.294 "req_id": 1 00:20:40.294 } 00:20:40.294 Got JSON-RPC error response 00:20:40.294 response: 00:20:40.294 { 00:20:40.294 "code": -1, 00:20:40.294 "message": "Operation not permitted" 00:20:40.294 } 00:20:40.294 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.552 [2024-11-20 12:34:46.133479] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.552 [2024-11-20 12:34:46.133503] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:40.552 request: 00:20:40.552 { 00:20:40.552 "name": "TLSTEST", 00:20:40.552 "trtype": "tcp", 00:20:40.552 "traddr": "10.0.0.2", 00:20:40.552 "adrfam": "ipv4", 00:20:40.552 "trsvcid": "4420", 00:20:40.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.552 "prchk_reftag": false, 00:20:40.552 "prchk_guard": false, 00:20:40.552 "hdgst": false, 00:20:40.552 "ddgst": false, 00:20:40.552 "psk": "key0", 00:20:40.552 "allow_unrecognized_csi": false, 00:20:40.552 "method": "bdev_nvme_attach_controller", 00:20:40.552 "req_id": 1 00:20:40.552 } 00:20:40.552 Got JSON-RPC error response 00:20:40.552 response: 00:20:40.552 { 00:20:40.552 "code": -126, 00:20:40.552 "message": "Required key not available" 00:20:40.552 } 00:20:40.552 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 202348 00:20:40.552 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 202348 ']' 00:20:40.552 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 202348 00:20:40.552 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.552 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202348 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 202348' 00:20:40.553 killing process with pid 202348 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 202348 00:20:40.553 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.553 00:20:40.553 Latency(us) 00:20:40.553 [2024-11-20T11:34:46.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.553 [2024-11-20T11:34:46.319Z] =================================================================================================================== 00:20:40.553 [2024-11-20T11:34:46.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.553 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 202348 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 200253 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 200253 ']' 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 200253 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200253 00:20:40.811 12:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200253' 00:20:40.811 killing process with pid 200253 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 200253 00:20:40.811 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 200253 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=202588 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 202588 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 202588 ']' 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.071 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.071 [2024-11-20 12:34:46.643858] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:41.071 [2024-11-20 12:34:46.643914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.071 [2024-11-20 12:34:46.712781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.071 [2024-11-20 12:34:46.752735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.071 [2024-11-20 12:34:46.752769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.071 [2024-11-20 12:34:46.752777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.071 [2024-11-20 12:34:46.752782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.071 [2024-11-20 12:34:46.752787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
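The negative path in this run hinges on key-file permissions: after `chmod 0666` (target/tls.sh@171), `keyring_file_add_key` fails with "Invalid permissions for key file '/tmp/tmp.lPlypwmqUq': 0100666", and the subsequent attach fails with "Required key not available". The check implied by those errors can be sketched as follows; this is an illustrative approximation of the behavior seen in the log, not SPDK's actual `keyring_file_check_path` implementation, and `key_mode_ok` is a name invented here.

```shell
# Approximation of the permission check behind the log's
# "Invalid permissions for key file ... 0100666" error:
# a PSK file must grant no access to group or others.
key_mode_ok() {
  local mode
  mode=$(stat -c '%a' "$1")       # e.g. 600 or 666
  # Reject any group/other bits; 0600 passes, 0666 fails.
  [ $(( 0$mode & 077 )) -eq 0 ]
}
```

This matches the run's flow: the test first succeeds with a 0600 key, loosens it to 0666 to provoke the `-1` (Operation not permitted) and `-126` (Required key not available) JSON-RPC errors, then restores 0600 at target/tls.sh@182 before continuing.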
00:20:41.071 [2024-11-20 12:34:46.753350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lPlypwmqUq 00:20:41.330 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.330 [2024-11-20 12:34:47.060002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.330 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.588 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.847 [2024-11-20 12:34:47.481087] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.847 [2024-11-20 12:34:47.481300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.847 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.105 malloc0 00:20:42.105 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.364 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:42.364 [2024-11-20 12:34:48.066480] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lPlypwmqUq': 0100666 00:20:42.364 [2024-11-20 12:34:48.066509] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:42.365 request: 00:20:42.365 { 00:20:42.365 "name": "key0", 00:20:42.365 "path": "/tmp/tmp.lPlypwmqUq", 00:20:42.365 "method": "keyring_file_add_key", 00:20:42.365 "req_id": 1 
00:20:42.365 } 00:20:42.365 Got JSON-RPC error response 00:20:42.365 response: 00:20:42.365 { 00:20:42.365 "code": -1, 00:20:42.365 "message": "Operation not permitted" 00:20:42.365 } 00:20:42.365 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.624 [2024-11-20 12:34:48.258985] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:42.624 [2024-11-20 12:34:48.259015] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:42.624 request: 00:20:42.624 { 00:20:42.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.624 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.624 "psk": "key0", 00:20:42.624 "method": "nvmf_subsystem_add_host", 00:20:42.624 "req_id": 1 00:20:42.624 } 00:20:42.624 Got JSON-RPC error response 00:20:42.624 response: 00:20:42.624 { 00:20:42.624 "code": -32603, 00:20:42.624 "message": "Internal error" 00:20:42.624 } 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 202588 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 202588 ']' 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 202588 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.624 12:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202588 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202588' 00:20:42.624 killing process with pid 202588 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 202588 00:20:42.624 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 202588 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lPlypwmqUq 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=202889 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 202889 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 202889 ']' 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.883 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.883 [2024-11-20 12:34:48.570365] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:42.883 [2024-11-20 12:34:48.570418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.142 [2024-11-20 12:34:48.651011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.142 [2024-11-20 12:34:48.691603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.142 [2024-11-20 12:34:48.691639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.142 [2024-11-20 12:34:48.691647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.142 [2024-11-20 12:34:48.691653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.142 [2024-11-20 12:34:48.691659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.142 [2024-11-20 12:34:48.692215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lPlypwmqUq 00:20:43.142 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.401 [2024-11-20 12:34:48.995700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.401 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.658 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.658 [2024-11-20 12:34:49.384686] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.658 [2024-11-20 12:34:49.384882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:43.658 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.916 malloc0 00:20:43.916 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.175 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:44.434 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=203291 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 203291 /var/tmp/bdevperf.sock 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 203291 ']' 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.434 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 [2024-11-20 12:34:50.238086] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:44.693 [2024-11-20 12:34:50.238140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203291 ] 00:20:44.693 [2024-11-20 12:34:50.311422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.693 [2024-11-20 12:34:50.351687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.693 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.693 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.693 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:20:44.952 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.211 [2024-11-20 12:34:50.802448] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.211 TLSTESTn1 00:20:45.211 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.471 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:45.471 "subsystems": [ 00:20:45.471 { 00:20:45.471 "subsystem": "keyring", 00:20:45.471 "config": [ 00:20:45.471 { 00:20:45.471 "method": "keyring_file_add_key", 00:20:45.471 "params": { 00:20:45.471 "name": "key0", 00:20:45.471 "path": "/tmp/tmp.lPlypwmqUq" 00:20:45.471 } 00:20:45.471 } 00:20:45.471 ] 00:20:45.471 }, 00:20:45.471 { 00:20:45.471 "subsystem": "iobuf", 00:20:45.471 "config": [ 00:20:45.471 { 00:20:45.471 "method": "iobuf_set_options", 00:20:45.471 "params": { 00:20:45.471 "small_pool_count": 8192, 00:20:45.471 "large_pool_count": 1024, 00:20:45.471 "small_bufsize": 8192, 00:20:45.471 "large_bufsize": 135168, 00:20:45.471 "enable_numa": false 00:20:45.471 } 00:20:45.471 } 00:20:45.471 ] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "sock", 00:20:45.472 "config": [ 00:20:45.472 { 00:20:45.472 "method": "sock_set_default_impl", 00:20:45.472 "params": { 00:20:45.472 "impl_name": "posix" 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "sock_impl_set_options", 00:20:45.472 "params": { 00:20:45.472 "impl_name": "ssl", 00:20:45.472 "recv_buf_size": 4096, 00:20:45.472 "send_buf_size": 4096, 00:20:45.472 "enable_recv_pipe": true, 00:20:45.472 "enable_quickack": false, 00:20:45.472 "enable_placement_id": 0, 00:20:45.472 "enable_zerocopy_send_server": true, 00:20:45.472 "enable_zerocopy_send_client": false, 00:20:45.472 "zerocopy_threshold": 0, 00:20:45.472 "tls_version": 0, 00:20:45.472 "enable_ktls": false 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "sock_impl_set_options", 00:20:45.472 "params": { 00:20:45.472 "impl_name": "posix", 00:20:45.472 "recv_buf_size": 2097152, 00:20:45.472 "send_buf_size": 2097152, 00:20:45.472 "enable_recv_pipe": true, 00:20:45.472 "enable_quickack": false, 00:20:45.472 "enable_placement_id": 0, 
00:20:45.472 "enable_zerocopy_send_server": true, 00:20:45.472 "enable_zerocopy_send_client": false, 00:20:45.472 "zerocopy_threshold": 0, 00:20:45.472 "tls_version": 0, 00:20:45.472 "enable_ktls": false 00:20:45.472 } 00:20:45.472 } 00:20:45.472 ] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "vmd", 00:20:45.472 "config": [] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "accel", 00:20:45.472 "config": [ 00:20:45.472 { 00:20:45.472 "method": "accel_set_options", 00:20:45.472 "params": { 00:20:45.472 "small_cache_size": 128, 00:20:45.472 "large_cache_size": 16, 00:20:45.472 "task_count": 2048, 00:20:45.472 "sequence_count": 2048, 00:20:45.472 "buf_count": 2048 00:20:45.472 } 00:20:45.472 } 00:20:45.472 ] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "bdev", 00:20:45.472 "config": [ 00:20:45.472 { 00:20:45.472 "method": "bdev_set_options", 00:20:45.472 "params": { 00:20:45.472 "bdev_io_pool_size": 65535, 00:20:45.472 "bdev_io_cache_size": 256, 00:20:45.472 "bdev_auto_examine": true, 00:20:45.472 "iobuf_small_cache_size": 128, 00:20:45.472 "iobuf_large_cache_size": 16 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_raid_set_options", 00:20:45.472 "params": { 00:20:45.472 "process_window_size_kb": 1024, 00:20:45.472 "process_max_bandwidth_mb_sec": 0 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_iscsi_set_options", 00:20:45.472 "params": { 00:20:45.472 "timeout_sec": 30 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_nvme_set_options", 00:20:45.472 "params": { 00:20:45.472 "action_on_timeout": "none", 00:20:45.472 "timeout_us": 0, 00:20:45.472 "timeout_admin_us": 0, 00:20:45.472 "keep_alive_timeout_ms": 10000, 00:20:45.472 "arbitration_burst": 0, 00:20:45.472 "low_priority_weight": 0, 00:20:45.472 "medium_priority_weight": 0, 00:20:45.472 "high_priority_weight": 0, 00:20:45.472 "nvme_adminq_poll_period_us": 10000, 00:20:45.472 "nvme_ioq_poll_period_us": 0, 
00:20:45.472 "io_queue_requests": 0, 00:20:45.472 "delay_cmd_submit": true, 00:20:45.472 "transport_retry_count": 4, 00:20:45.472 "bdev_retry_count": 3, 00:20:45.472 "transport_ack_timeout": 0, 00:20:45.472 "ctrlr_loss_timeout_sec": 0, 00:20:45.472 "reconnect_delay_sec": 0, 00:20:45.472 "fast_io_fail_timeout_sec": 0, 00:20:45.472 "disable_auto_failback": false, 00:20:45.472 "generate_uuids": false, 00:20:45.472 "transport_tos": 0, 00:20:45.472 "nvme_error_stat": false, 00:20:45.472 "rdma_srq_size": 0, 00:20:45.472 "io_path_stat": false, 00:20:45.472 "allow_accel_sequence": false, 00:20:45.472 "rdma_max_cq_size": 0, 00:20:45.472 "rdma_cm_event_timeout_ms": 0, 00:20:45.472 "dhchap_digests": [ 00:20:45.472 "sha256", 00:20:45.472 "sha384", 00:20:45.472 "sha512" 00:20:45.472 ], 00:20:45.472 "dhchap_dhgroups": [ 00:20:45.472 "null", 00:20:45.472 "ffdhe2048", 00:20:45.472 "ffdhe3072", 00:20:45.472 "ffdhe4096", 00:20:45.472 "ffdhe6144", 00:20:45.472 "ffdhe8192" 00:20:45.472 ] 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_nvme_set_hotplug", 00:20:45.472 "params": { 00:20:45.472 "period_us": 100000, 00:20:45.472 "enable": false 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_malloc_create", 00:20:45.472 "params": { 00:20:45.472 "name": "malloc0", 00:20:45.472 "num_blocks": 8192, 00:20:45.472 "block_size": 4096, 00:20:45.472 "physical_block_size": 4096, 00:20:45.472 "uuid": "98b207b2-f838-47a8-a18f-af13c13da5a8", 00:20:45.472 "optimal_io_boundary": 0, 00:20:45.472 "md_size": 0, 00:20:45.472 "dif_type": 0, 00:20:45.472 "dif_is_head_of_md": false, 00:20:45.472 "dif_pi_format": 0 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "bdev_wait_for_examine" 00:20:45.472 } 00:20:45.472 ] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "nbd", 00:20:45.472 "config": [] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "scheduler", 00:20:45.472 "config": [ 00:20:45.472 { 00:20:45.472 "method": 
"framework_set_scheduler", 00:20:45.472 "params": { 00:20:45.472 "name": "static" 00:20:45.472 } 00:20:45.472 } 00:20:45.472 ] 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "subsystem": "nvmf", 00:20:45.472 "config": [ 00:20:45.472 { 00:20:45.472 "method": "nvmf_set_config", 00:20:45.472 "params": { 00:20:45.472 "discovery_filter": "match_any", 00:20:45.472 "admin_cmd_passthru": { 00:20:45.472 "identify_ctrlr": false 00:20:45.472 }, 00:20:45.472 "dhchap_digests": [ 00:20:45.472 "sha256", 00:20:45.472 "sha384", 00:20:45.472 "sha512" 00:20:45.472 ], 00:20:45.472 "dhchap_dhgroups": [ 00:20:45.472 "null", 00:20:45.472 "ffdhe2048", 00:20:45.472 "ffdhe3072", 00:20:45.472 "ffdhe4096", 00:20:45.472 "ffdhe6144", 00:20:45.472 "ffdhe8192" 00:20:45.472 ] 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "nvmf_set_max_subsystems", 00:20:45.472 "params": { 00:20:45.472 "max_subsystems": 1024 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "nvmf_set_crdt", 00:20:45.472 "params": { 00:20:45.472 "crdt1": 0, 00:20:45.472 "crdt2": 0, 00:20:45.472 "crdt3": 0 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "nvmf_create_transport", 00:20:45.472 "params": { 00:20:45.472 "trtype": "TCP", 00:20:45.472 "max_queue_depth": 128, 00:20:45.472 "max_io_qpairs_per_ctrlr": 127, 00:20:45.472 "in_capsule_data_size": 4096, 00:20:45.472 "max_io_size": 131072, 00:20:45.472 "io_unit_size": 131072, 00:20:45.472 "max_aq_depth": 128, 00:20:45.472 "num_shared_buffers": 511, 00:20:45.472 "buf_cache_size": 4294967295, 00:20:45.472 "dif_insert_or_strip": false, 00:20:45.472 "zcopy": false, 00:20:45.472 "c2h_success": false, 00:20:45.472 "sock_priority": 0, 00:20:45.472 "abort_timeout_sec": 1, 00:20:45.472 "ack_timeout": 0, 00:20:45.472 "data_wr_pool_size": 0 00:20:45.472 } 00:20:45.472 }, 00:20:45.472 { 00:20:45.472 "method": "nvmf_create_subsystem", 00:20:45.472 "params": { 00:20:45.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.473 
"allow_any_host": false, 00:20:45.473 "serial_number": "SPDK00000000000001", 00:20:45.473 "model_number": "SPDK bdev Controller", 00:20:45.473 "max_namespaces": 10, 00:20:45.473 "min_cntlid": 1, 00:20:45.473 "max_cntlid": 65519, 00:20:45.473 "ana_reporting": false 00:20:45.473 } 00:20:45.473 }, 00:20:45.473 { 00:20:45.473 "method": "nvmf_subsystem_add_host", 00:20:45.473 "params": { 00:20:45.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.473 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.473 "psk": "key0" 00:20:45.473 } 00:20:45.473 }, 00:20:45.473 { 00:20:45.473 "method": "nvmf_subsystem_add_ns", 00:20:45.473 "params": { 00:20:45.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.473 "namespace": { 00:20:45.473 "nsid": 1, 00:20:45.473 "bdev_name": "malloc0", 00:20:45.473 "nguid": "98B207B2F83847A8A18FAF13C13DA5A8", 00:20:45.473 "uuid": "98b207b2-f838-47a8-a18f-af13c13da5a8", 00:20:45.473 "no_auto_visible": false 00:20:45.473 } 00:20:45.473 } 00:20:45.473 }, 00:20:45.473 { 00:20:45.473 "method": "nvmf_subsystem_add_listener", 00:20:45.473 "params": { 00:20:45.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.473 "listen_address": { 00:20:45.473 "trtype": "TCP", 00:20:45.473 "adrfam": "IPv4", 00:20:45.473 "traddr": "10.0.0.2", 00:20:45.473 "trsvcid": "4420" 00:20:45.473 }, 00:20:45.473 "secure_channel": true 00:20:45.473 } 00:20:45.473 } 00:20:45.473 ] 00:20:45.473 } 00:20:45.473 ] 00:20:45.473 }' 00:20:45.473 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.732 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:45.732 "subsystems": [ 00:20:45.732 { 00:20:45.732 "subsystem": "keyring", 00:20:45.732 "config": [ 00:20:45.732 { 00:20:45.732 "method": "keyring_file_add_key", 00:20:45.732 "params": { 00:20:45.732 "name": "key0", 00:20:45.732 "path": "/tmp/tmp.lPlypwmqUq" 00:20:45.732 } 
00:20:45.732 } 00:20:45.732 ] 00:20:45.732 }, 00:20:45.732 { 00:20:45.732 "subsystem": "iobuf", 00:20:45.732 "config": [ 00:20:45.732 { 00:20:45.732 "method": "iobuf_set_options", 00:20:45.732 "params": { 00:20:45.732 "small_pool_count": 8192, 00:20:45.732 "large_pool_count": 1024, 00:20:45.732 "small_bufsize": 8192, 00:20:45.732 "large_bufsize": 135168, 00:20:45.732 "enable_numa": false 00:20:45.732 } 00:20:45.732 } 00:20:45.732 ] 00:20:45.732 }, 00:20:45.732 { 00:20:45.732 "subsystem": "sock", 00:20:45.732 "config": [ 00:20:45.732 { 00:20:45.732 "method": "sock_set_default_impl", 00:20:45.732 "params": { 00:20:45.732 "impl_name": "posix" 00:20:45.732 } 00:20:45.732 }, 00:20:45.732 { 00:20:45.732 "method": "sock_impl_set_options", 00:20:45.732 "params": { 00:20:45.732 "impl_name": "ssl", 00:20:45.732 "recv_buf_size": 4096, 00:20:45.732 "send_buf_size": 4096, 00:20:45.732 "enable_recv_pipe": true, 00:20:45.732 "enable_quickack": false, 00:20:45.732 "enable_placement_id": 0, 00:20:45.732 "enable_zerocopy_send_server": true, 00:20:45.732 "enable_zerocopy_send_client": false, 00:20:45.732 "zerocopy_threshold": 0, 00:20:45.732 "tls_version": 0, 00:20:45.732 "enable_ktls": false 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "sock_impl_set_options", 00:20:45.733 "params": { 00:20:45.733 "impl_name": "posix", 00:20:45.733 "recv_buf_size": 2097152, 00:20:45.733 "send_buf_size": 2097152, 00:20:45.733 "enable_recv_pipe": true, 00:20:45.733 "enable_quickack": false, 00:20:45.733 "enable_placement_id": 0, 00:20:45.733 "enable_zerocopy_send_server": true, 00:20:45.733 "enable_zerocopy_send_client": false, 00:20:45.733 "zerocopy_threshold": 0, 00:20:45.733 "tls_version": 0, 00:20:45.733 "enable_ktls": false 00:20:45.733 } 00:20:45.733 } 00:20:45.733 ] 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "subsystem": "vmd", 00:20:45.733 "config": [] 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "subsystem": "accel", 00:20:45.733 "config": [ 00:20:45.733 { 00:20:45.733 
"method": "accel_set_options", 00:20:45.733 "params": { 00:20:45.733 "small_cache_size": 128, 00:20:45.733 "large_cache_size": 16, 00:20:45.733 "task_count": 2048, 00:20:45.733 "sequence_count": 2048, 00:20:45.733 "buf_count": 2048 00:20:45.733 } 00:20:45.733 } 00:20:45.733 ] 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "subsystem": "bdev", 00:20:45.733 "config": [ 00:20:45.733 { 00:20:45.733 "method": "bdev_set_options", 00:20:45.733 "params": { 00:20:45.733 "bdev_io_pool_size": 65535, 00:20:45.733 "bdev_io_cache_size": 256, 00:20:45.733 "bdev_auto_examine": true, 00:20:45.733 "iobuf_small_cache_size": 128, 00:20:45.733 "iobuf_large_cache_size": 16 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_raid_set_options", 00:20:45.733 "params": { 00:20:45.733 "process_window_size_kb": 1024, 00:20:45.733 "process_max_bandwidth_mb_sec": 0 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_iscsi_set_options", 00:20:45.733 "params": { 00:20:45.733 "timeout_sec": 30 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_nvme_set_options", 00:20:45.733 "params": { 00:20:45.733 "action_on_timeout": "none", 00:20:45.733 "timeout_us": 0, 00:20:45.733 "timeout_admin_us": 0, 00:20:45.733 "keep_alive_timeout_ms": 10000, 00:20:45.733 "arbitration_burst": 0, 00:20:45.733 "low_priority_weight": 0, 00:20:45.733 "medium_priority_weight": 0, 00:20:45.733 "high_priority_weight": 0, 00:20:45.733 "nvme_adminq_poll_period_us": 10000, 00:20:45.733 "nvme_ioq_poll_period_us": 0, 00:20:45.733 "io_queue_requests": 512, 00:20:45.733 "delay_cmd_submit": true, 00:20:45.733 "transport_retry_count": 4, 00:20:45.733 "bdev_retry_count": 3, 00:20:45.733 "transport_ack_timeout": 0, 00:20:45.733 "ctrlr_loss_timeout_sec": 0, 00:20:45.733 "reconnect_delay_sec": 0, 00:20:45.733 "fast_io_fail_timeout_sec": 0, 00:20:45.733 "disable_auto_failback": false, 00:20:45.733 "generate_uuids": false, 00:20:45.733 "transport_tos": 0, 00:20:45.733 
"nvme_error_stat": false, 00:20:45.733 "rdma_srq_size": 0, 00:20:45.733 "io_path_stat": false, 00:20:45.733 "allow_accel_sequence": false, 00:20:45.733 "rdma_max_cq_size": 0, 00:20:45.733 "rdma_cm_event_timeout_ms": 0, 00:20:45.733 "dhchap_digests": [ 00:20:45.733 "sha256", 00:20:45.733 "sha384", 00:20:45.733 "sha512" 00:20:45.733 ], 00:20:45.733 "dhchap_dhgroups": [ 00:20:45.733 "null", 00:20:45.733 "ffdhe2048", 00:20:45.733 "ffdhe3072", 00:20:45.733 "ffdhe4096", 00:20:45.733 "ffdhe6144", 00:20:45.733 "ffdhe8192" 00:20:45.733 ] 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_nvme_attach_controller", 00:20:45.733 "params": { 00:20:45.733 "name": "TLSTEST", 00:20:45.733 "trtype": "TCP", 00:20:45.733 "adrfam": "IPv4", 00:20:45.733 "traddr": "10.0.0.2", 00:20:45.733 "trsvcid": "4420", 00:20:45.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.733 "prchk_reftag": false, 00:20:45.733 "prchk_guard": false, 00:20:45.733 "ctrlr_loss_timeout_sec": 0, 00:20:45.733 "reconnect_delay_sec": 0, 00:20:45.733 "fast_io_fail_timeout_sec": 0, 00:20:45.733 "psk": "key0", 00:20:45.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.733 "hdgst": false, 00:20:45.733 "ddgst": false, 00:20:45.733 "multipath": "multipath" 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_nvme_set_hotplug", 00:20:45.733 "params": { 00:20:45.733 "period_us": 100000, 00:20:45.733 "enable": false 00:20:45.733 } 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "method": "bdev_wait_for_examine" 00:20:45.733 } 00:20:45.733 ] 00:20:45.733 }, 00:20:45.733 { 00:20:45.733 "subsystem": "nbd", 00:20:45.733 "config": [] 00:20:45.733 } 00:20:45.733 ] 00:20:45.733 }' 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 203291 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 203291 ']' 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 203291 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203291 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203291' 00:20:45.733 killing process with pid 203291 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 203291 00:20:45.733 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.733 00:20:45.733 Latency(us) 00:20:45.733 [2024-11-20T11:34:51.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.733 [2024-11-20T11:34:51.499Z] =================================================================================================================== 00:20:45.733 [2024-11-20T11:34:51.499Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.733 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 203291 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 202889 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 202889 ']' 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 202889 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202889 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202889' 00:20:45.992 killing process with pid 202889 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 202889 00:20:45.992 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 202889 00:20:46.252 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:46.252 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.252 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.252 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:46.252 "subsystems": [ 00:20:46.252 { 00:20:46.252 "subsystem": "keyring", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "keyring_file_add_key", 00:20:46.252 "params": { 00:20:46.252 "name": "key0", 00:20:46.252 "path": "/tmp/tmp.lPlypwmqUq" 00:20:46.252 } 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "iobuf", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "iobuf_set_options", 00:20:46.252 "params": { 00:20:46.252 "small_pool_count": 8192, 00:20:46.252 "large_pool_count": 1024, 00:20:46.252 "small_bufsize": 8192, 00:20:46.252 "large_bufsize": 135168, 00:20:46.252 "enable_numa": false 00:20:46.252 } 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }, 00:20:46.252 
{ 00:20:46.252 "subsystem": "sock", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "sock_set_default_impl", 00:20:46.252 "params": { 00:20:46.252 "impl_name": "posix" 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "sock_impl_set_options", 00:20:46.252 "params": { 00:20:46.252 "impl_name": "ssl", 00:20:46.252 "recv_buf_size": 4096, 00:20:46.252 "send_buf_size": 4096, 00:20:46.252 "enable_recv_pipe": true, 00:20:46.252 "enable_quickack": false, 00:20:46.252 "enable_placement_id": 0, 00:20:46.252 "enable_zerocopy_send_server": true, 00:20:46.252 "enable_zerocopy_send_client": false, 00:20:46.252 "zerocopy_threshold": 0, 00:20:46.252 "tls_version": 0, 00:20:46.252 "enable_ktls": false 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "sock_impl_set_options", 00:20:46.252 "params": { 00:20:46.252 "impl_name": "posix", 00:20:46.252 "recv_buf_size": 2097152, 00:20:46.252 "send_buf_size": 2097152, 00:20:46.252 "enable_recv_pipe": true, 00:20:46.252 "enable_quickack": false, 00:20:46.252 "enable_placement_id": 0, 00:20:46.252 "enable_zerocopy_send_server": true, 00:20:46.252 "enable_zerocopy_send_client": false, 00:20:46.252 "zerocopy_threshold": 0, 00:20:46.252 "tls_version": 0, 00:20:46.252 "enable_ktls": false 00:20:46.252 } 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "vmd", 00:20:46.252 "config": [] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "accel", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "accel_set_options", 00:20:46.252 "params": { 00:20:46.252 "small_cache_size": 128, 00:20:46.252 "large_cache_size": 16, 00:20:46.252 "task_count": 2048, 00:20:46.252 "sequence_count": 2048, 00:20:46.252 "buf_count": 2048 00:20:46.252 } 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "bdev", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "bdev_set_options", 00:20:46.252 "params": { 00:20:46.252 
"bdev_io_pool_size": 65535, 00:20:46.252 "bdev_io_cache_size": 256, 00:20:46.252 "bdev_auto_examine": true, 00:20:46.252 "iobuf_small_cache_size": 128, 00:20:46.252 "iobuf_large_cache_size": 16 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_raid_set_options", 00:20:46.252 "params": { 00:20:46.252 "process_window_size_kb": 1024, 00:20:46.252 "process_max_bandwidth_mb_sec": 0 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_iscsi_set_options", 00:20:46.252 "params": { 00:20:46.252 "timeout_sec": 30 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_nvme_set_options", 00:20:46.252 "params": { 00:20:46.252 "action_on_timeout": "none", 00:20:46.252 "timeout_us": 0, 00:20:46.252 "timeout_admin_us": 0, 00:20:46.252 "keep_alive_timeout_ms": 10000, 00:20:46.252 "arbitration_burst": 0, 00:20:46.252 "low_priority_weight": 0, 00:20:46.252 "medium_priority_weight": 0, 00:20:46.252 "high_priority_weight": 0, 00:20:46.252 "nvme_adminq_poll_period_us": 10000, 00:20:46.252 "nvme_ioq_poll_period_us": 0, 00:20:46.252 "io_queue_requests": 0, 00:20:46.252 "delay_cmd_submit": true, 00:20:46.252 "transport_retry_count": 4, 00:20:46.252 "bdev_retry_count": 3, 00:20:46.252 "transport_ack_timeout": 0, 00:20:46.252 "ctrlr_loss_timeout_sec": 0, 00:20:46.252 "reconnect_delay_sec": 0, 00:20:46.252 "fast_io_fail_timeout_sec": 0, 00:20:46.252 "disable_auto_failback": false, 00:20:46.252 "generate_uuids": false, 00:20:46.252 "transport_tos": 0, 00:20:46.252 "nvme_error_stat": false, 00:20:46.252 "rdma_srq_size": 0, 00:20:46.252 "io_path_stat": false, 00:20:46.252 "allow_accel_sequence": false, 00:20:46.252 "rdma_max_cq_size": 0, 00:20:46.252 "rdma_cm_event_timeout_ms": 0, 00:20:46.252 "dhchap_digests": [ 00:20:46.252 "sha256", 00:20:46.252 "sha384", 00:20:46.252 "sha512" 00:20:46.252 ], 00:20:46.252 "dhchap_dhgroups": [ 00:20:46.252 "null", 00:20:46.252 "ffdhe2048", 00:20:46.252 "ffdhe3072", 00:20:46.252 "ffdhe4096", 
00:20:46.252 "ffdhe6144", 00:20:46.252 "ffdhe8192" 00:20:46.252 ] 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_nvme_set_hotplug", 00:20:46.252 "params": { 00:20:46.252 "period_us": 100000, 00:20:46.252 "enable": false 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_malloc_create", 00:20:46.252 "params": { 00:20:46.252 "name": "malloc0", 00:20:46.252 "num_blocks": 8192, 00:20:46.252 "block_size": 4096, 00:20:46.252 "physical_block_size": 4096, 00:20:46.252 "uuid": "98b207b2-f838-47a8-a18f-af13c13da5a8", 00:20:46.252 "optimal_io_boundary": 0, 00:20:46.252 "md_size": 0, 00:20:46.252 "dif_type": 0, 00:20:46.252 "dif_is_head_of_md": false, 00:20:46.252 "dif_pi_format": 0 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "bdev_wait_for_examine" 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "nbd", 00:20:46.252 "config": [] 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "subsystem": "scheduler", 00:20:46.252 "config": [ 00:20:46.252 { 00:20:46.252 "method": "framework_set_scheduler", 00:20:46.253 "params": { 00:20:46.253 "name": "static" 00:20:46.253 } 00:20:46.253 } 00:20:46.253 ] 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "subsystem": "nvmf", 00:20:46.253 "config": [ 00:20:46.253 { 00:20:46.253 "method": "nvmf_set_config", 00:20:46.253 "params": { 00:20:46.253 "discovery_filter": "match_any", 00:20:46.253 "admin_cmd_passthru": { 00:20:46.253 "identify_ctrlr": false 00:20:46.253 }, 00:20:46.253 "dhchap_digests": [ 00:20:46.253 "sha256", 00:20:46.253 "sha384", 00:20:46.253 "sha512" 00:20:46.253 ], 00:20:46.253 "dhchap_dhgroups": [ 00:20:46.253 "null", 00:20:46.253 "ffdhe2048", 00:20:46.253 "ffdhe3072", 00:20:46.253 "ffdhe4096", 00:20:46.253 "ffdhe6144", 00:20:46.253 "ffdhe8192" 00:20:46.253 ] 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_set_max_subsystems", 00:20:46.253 "params": { 00:20:46.253 "max_subsystems": 1024 00:20:46.253 
} 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_set_crdt", 00:20:46.253 "params": { 00:20:46.253 "crdt1": 0, 00:20:46.253 "crdt2": 0, 00:20:46.253 "crdt3": 0 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_create_transport", 00:20:46.253 "params": { 00:20:46.253 "trtype": "TCP", 00:20:46.253 "max_queue_depth": 128, 00:20:46.253 "max_io_qpairs_per_ctrlr": 127, 00:20:46.253 "in_capsule_data_size": 4096, 00:20:46.253 "max_io_size": 131072, 00:20:46.253 "io_unit_size": 131072, 00:20:46.253 "max_aq_depth": 128, 00:20:46.253 "num_shared_buffers": 511, 00:20:46.253 "buf_cache_size": 4294967295, 00:20:46.253 "dif_insert_or_strip": false, 00:20:46.253 "zcopy": false, 00:20:46.253 "c2h_success": false, 00:20:46.253 "sock_priority": 0, 00:20:46.253 "abort_timeout_sec": 1, 00:20:46.253 "ack_timeout": 0, 00:20:46.253 "data_wr_pool_size": 0 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_create_subsystem", 00:20:46.253 "params": { 00:20:46.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.253 "allow_any_host": false, 00:20:46.253 "serial_number": "SPDK00000000000001", 00:20:46.253 "model_number": "SPDK bdev Controller", 00:20:46.253 "max_namespaces": 10, 00:20:46.253 "min_cntlid": 1, 00:20:46.253 "max_cntlid": 65519, 00:20:46.253 "ana_reporting": false 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_subsystem_add_host", 00:20:46.253 "params": { 00:20:46.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.253 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.253 "psk": "key0" 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_subsystem_add_ns", 00:20:46.253 "params": { 00:20:46.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.253 "namespace": { 00:20:46.253 "nsid": 1, 00:20:46.253 "bdev_name": "malloc0", 00:20:46.253 "nguid": "98B207B2F83847A8A18FAF13C13DA5A8", 00:20:46.253 "uuid": "98b207b2-f838-47a8-a18f-af13c13da5a8", 00:20:46.253 "no_auto_visible": false 
00:20:46.253 } 00:20:46.253 } 00:20:46.253 }, 00:20:46.253 { 00:20:46.253 "method": "nvmf_subsystem_add_listener", 00:20:46.253 "params": { 00:20:46.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.253 "listen_address": { 00:20:46.253 "trtype": "TCP", 00:20:46.253 "adrfam": "IPv4", 00:20:46.253 "traddr": "10.0.0.2", 00:20:46.253 "trsvcid": "4420" 00:20:46.253 }, 00:20:46.253 "secure_channel": true 00:20:46.253 } 00:20:46.253 } 00:20:46.253 ] 00:20:46.253 } 00:20:46.253 ] 00:20:46.253 }' 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=203578 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 203578 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 203578 ']' 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.253 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.253 [2024-11-20 12:34:51.898551] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:46.253 [2024-11-20 12:34:51.898595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.253 [2024-11-20 12:34:51.968170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.253 [2024-11-20 12:34:52.008364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.253 [2024-11-20 12:34:52.008397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.253 [2024-11-20 12:34:52.008404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.253 [2024-11-20 12:34:52.008410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.253 [2024-11-20 12:34:52.008415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.253 [2024-11-20 12:34:52.009004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.512 [2024-11-20 12:34:52.220883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.512 [2024-11-20 12:34:52.252915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.512 [2024-11-20 12:34:52.253105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=203657 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 203657 /var/tmp/bdevperf.sock 00:20:47.080 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 203657 ']' 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:47.081 "subsystems": [ 00:20:47.081 { 00:20:47.081 "subsystem": "keyring", 00:20:47.081 "config": [ 00:20:47.081 { 00:20:47.081 "method": "keyring_file_add_key", 00:20:47.081 "params": { 00:20:47.081 "name": "key0", 00:20:47.081 "path": "/tmp/tmp.lPlypwmqUq" 00:20:47.081 } 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "iobuf", 00:20:47.081 "config": [ 00:20:47.081 { 00:20:47.081 "method": "iobuf_set_options", 00:20:47.081 "params": { 00:20:47.081 "small_pool_count": 8192, 00:20:47.081 "large_pool_count": 1024, 00:20:47.081 "small_bufsize": 8192, 00:20:47.081 "large_bufsize": 135168, 00:20:47.081 "enable_numa": false 00:20:47.081 } 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "sock", 00:20:47.081 "config": [ 00:20:47.081 { 00:20:47.081 "method": "sock_set_default_impl", 00:20:47.081 "params": { 00:20:47.081 "impl_name": "posix" 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "sock_impl_set_options", 00:20:47.081 "params": { 00:20:47.081 "impl_name": "ssl", 00:20:47.081 "recv_buf_size": 4096, 00:20:47.081 "send_buf_size": 4096, 00:20:47.081 "enable_recv_pipe": true, 00:20:47.081 "enable_quickack": false, 00:20:47.081 "enable_placement_id": 0, 00:20:47.081 "enable_zerocopy_send_server": true, 00:20:47.081 "enable_zerocopy_send_client": false, 00:20:47.081 "zerocopy_threshold": 0, 00:20:47.081 "tls_version": 0, 00:20:47.081 "enable_ktls": false 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "sock_impl_set_options", 00:20:47.081 "params": { 
00:20:47.081 "impl_name": "posix", 00:20:47.081 "recv_buf_size": 2097152, 00:20:47.081 "send_buf_size": 2097152, 00:20:47.081 "enable_recv_pipe": true, 00:20:47.081 "enable_quickack": false, 00:20:47.081 "enable_placement_id": 0, 00:20:47.081 "enable_zerocopy_send_server": true, 00:20:47.081 "enable_zerocopy_send_client": false, 00:20:47.081 "zerocopy_threshold": 0, 00:20:47.081 "tls_version": 0, 00:20:47.081 "enable_ktls": false 00:20:47.081 } 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "vmd", 00:20:47.081 "config": [] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "accel", 00:20:47.081 "config": [ 00:20:47.081 { 00:20:47.081 "method": "accel_set_options", 00:20:47.081 "params": { 00:20:47.081 "small_cache_size": 128, 00:20:47.081 "large_cache_size": 16, 00:20:47.081 "task_count": 2048, 00:20:47.081 "sequence_count": 2048, 00:20:47.081 "buf_count": 2048 00:20:47.081 } 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "bdev", 00:20:47.081 "config": [ 00:20:47.081 { 00:20:47.081 "method": "bdev_set_options", 00:20:47.081 "params": { 00:20:47.081 "bdev_io_pool_size": 65535, 00:20:47.081 "bdev_io_cache_size": 256, 00:20:47.081 "bdev_auto_examine": true, 00:20:47.081 "iobuf_small_cache_size": 128, 00:20:47.081 "iobuf_large_cache_size": 16 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "bdev_raid_set_options", 00:20:47.081 "params": { 00:20:47.081 "process_window_size_kb": 1024, 00:20:47.081 "process_max_bandwidth_mb_sec": 0 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "bdev_iscsi_set_options", 00:20:47.081 "params": { 00:20:47.081 "timeout_sec": 30 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "bdev_nvme_set_options", 00:20:47.081 "params": { 00:20:47.081 "action_on_timeout": "none", 00:20:47.081 "timeout_us": 0, 00:20:47.081 "timeout_admin_us": 0, 00:20:47.081 "keep_alive_timeout_ms": 10000, 00:20:47.081 
"arbitration_burst": 0, 00:20:47.081 "low_priority_weight": 0, 00:20:47.081 "medium_priority_weight": 0, 00:20:47.081 "high_priority_weight": 0, 00:20:47.081 "nvme_adminq_poll_period_us": 10000, 00:20:47.081 "nvme_ioq_poll_period_us": 0, 00:20:47.081 "io_queue_requests": 512, 00:20:47.081 "delay_cmd_submit": true, 00:20:47.081 "transport_retry_count": 4, 00:20:47.081 "bdev_retry_count": 3, 00:20:47.081 "transport_ack_timeout": 0, 00:20:47.081 "ctrlr_loss_timeout_sec": 0, 00:20:47.081 "reconnect_delay_sec": 0, 00:20:47.081 "fast_io_fail_timeout_sec": 0, 00:20:47.081 "disable_auto_failback": false, 00:20:47.081 "generate_uuids": false, 00:20:47.081 "transport_tos": 0, 00:20:47.081 "nvme_error_stat": false, 00:20:47.081 "rdma_srq_size": 0, 00:20:47.081 "io_path_stat": false, 00:20:47.081 "allow_accel_sequence": false, 00:20:47.081 "rdma_max_cq_size": 0, 00:20:47.081 "rdma_cm_event_timeout_ms": 0, 00:20:47.081 "dhchap_digests": [ 00:20:47.081 "sha256", 00:20:47.081 "sha384", 00:20:47.081 "sha512" 00:20:47.081 ], 00:20:47.081 "dhchap_dhgroups": [ 00:20:47.081 "null", 00:20:47.081 "ffdhe2048", 00:20:47.081 "ffdhe3072", 00:20:47.081 "ffdhe4096", 00:20:47.081 "ffdhe6144", 00:20:47.081 "ffdhe8192" 00:20:47.081 ] 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "bdev_nvme_attach_controller", 00:20:47.081 "params": { 00:20:47.081 "name": "TLSTEST", 00:20:47.081 "trtype": "TCP", 00:20:47.081 "adrfam": "IPv4", 00:20:47.081 "traddr": "10.0.0.2", 00:20:47.081 "trsvcid": "4420", 00:20:47.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.081 "prchk_reftag": false, 00:20:47.081 "prchk_guard": false, 00:20:47.081 "ctrlr_loss_timeout_sec": 0, 00:20:47.081 "reconnect_delay_sec": 0, 00:20:47.081 "fast_io_fail_timeout_sec": 0, 00:20:47.081 "psk": "key0", 00:20:47.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.081 "hdgst": false, 00:20:47.081 "ddgst": false, 00:20:47.081 "multipath": "multipath" 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 
"method": "bdev_nvme_set_hotplug", 00:20:47.081 "params": { 00:20:47.081 "period_us": 100000, 00:20:47.081 "enable": false 00:20:47.081 } 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "method": "bdev_wait_for_examine" 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }, 00:20:47.081 { 00:20:47.081 "subsystem": "nbd", 00:20:47.081 "config": [] 00:20:47.081 } 00:20:47.081 ] 00:20:47.081 }' 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.081 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.081 [2024-11-20 12:34:52.820362] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:47.081 [2024-11-20 12:34:52.820416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203657 ] 00:20:47.341 [2024-11-20 12:34:52.898097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.341 [2024-11-20 12:34:52.939989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.341 [2024-11-20 12:34:53.092248] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.278 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.278 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.278 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.278 Running I/O for 10 seconds... 
00:20:50.150 5344.00 IOPS, 20.88 MiB/s [2024-11-20T11:34:56.853Z] 5490.00 IOPS, 21.45 MiB/s [2024-11-20T11:34:57.790Z] 5537.00 IOPS, 21.63 MiB/s [2024-11-20T11:34:59.167Z] 5556.25 IOPS, 21.70 MiB/s [2024-11-20T11:35:00.103Z] 5581.40 IOPS, 21.80 MiB/s [2024-11-20T11:35:01.038Z] 5526.33 IOPS, 21.59 MiB/s [2024-11-20T11:35:02.120Z] 5463.14 IOPS, 21.34 MiB/s [2024-11-20T11:35:03.057Z] 5416.88 IOPS, 21.16 MiB/s [2024-11-20T11:35:03.993Z] 5362.00 IOPS, 20.95 MiB/s [2024-11-20T11:35:03.993Z] 5343.00 IOPS, 20.87 MiB/s 00:20:58.227 Latency(us) 00:20:58.227 [2024-11-20T11:35:03.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.227 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:58.227 Verification LBA range: start 0x0 length 0x2000 00:20:58.227 TLSTESTn1 : 10.02 5346.92 20.89 0.00 0.00 23904.45 6772.05 29335.16 00:20:58.227 [2024-11-20T11:35:03.993Z] =================================================================================================================== 00:20:58.227 [2024-11-20T11:35:03.993Z] Total : 5346.92 20.89 0.00 0.00 23904.45 6772.05 29335.16 00:20:58.227 { 00:20:58.227 "results": [ 00:20:58.227 { 00:20:58.227 "job": "TLSTESTn1", 00:20:58.227 "core_mask": "0x4", 00:20:58.227 "workload": "verify", 00:20:58.227 "status": "finished", 00:20:58.227 "verify_range": { 00:20:58.227 "start": 0, 00:20:58.227 "length": 8192 00:20:58.227 }, 00:20:58.227 "queue_depth": 128, 00:20:58.227 "io_size": 4096, 00:20:58.227 "runtime": 10.016606, 00:20:58.227 "iops": 5346.920903148232, 00:20:58.227 "mibps": 20.886409777922783, 00:20:58.227 "io_failed": 0, 00:20:58.227 "io_timeout": 0, 00:20:58.227 "avg_latency_us": 23904.448840687175, 00:20:58.227 "min_latency_us": 6772.053333333333, 00:20:58.227 "max_latency_us": 29335.161904761906 00:20:58.227 } 00:20:58.227 ], 00:20:58.227 "core_count": 1 00:20:58.227 } 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 203657 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 203657 ']' 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 203657 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203657 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203657' 00:20:58.227 killing process with pid 203657 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 203657 00:20:58.227 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.227 00:20:58.227 Latency(us) 00:20:58.227 [2024-11-20T11:35:03.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.227 [2024-11-20T11:35:03.993Z] =================================================================================================================== 00:20:58.227 [2024-11-20T11:35:03.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.227 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 203657 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 203578 ']' 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203578' 00:20:58.487 killing process with pid 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 203578 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.487 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=205551 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 205551 00:20:58.747 12:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 205551 ']' 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.747 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.747 [2024-11-20 12:35:04.301617] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:20:58.747 [2024-11-20 12:35:04.301665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.747 [2024-11-20 12:35:04.378713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.747 [2024-11-20 12:35:04.419024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.747 [2024-11-20 12:35:04.419060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.747 [2024-11-20 12:35:04.419066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.747 [2024-11-20 12:35:04.419072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:58.747 [2024-11-20 12:35:04.419077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.747 [2024-11-20 12:35:04.419640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lPlypwmqUq 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lPlypwmqUq 00:20:59.006 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.007 [2024-11-20 12:35:04.718967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.007 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.266 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.525 [2024-11-20 12:35:05.103951] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:59.525 [2024-11-20 12:35:05.104147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.525 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:59.784 malloc0 00:20:59.784 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:59.784 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:21:00.043 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=205930 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 205930 /var/tmp/bdevperf.sock 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 205930 ']' 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.302 12:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.302 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.302 [2024-11-20 12:35:05.977681] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:00.302 [2024-11-20 12:35:05.977733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205930 ] 00:21:00.302 [2024-11-20 12:35:06.053448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.561 [2024-11-20 12:35:06.094375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.561 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.561 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:00.561 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:21:00.820 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.820 [2024-11-20 12:35:06.545610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:21:01.079 nvme0n1 00:21:01.079 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.079 Running I/O for 1 seconds... 00:21:02.016 5574.00 IOPS, 21.77 MiB/s 00:21:02.016 Latency(us) 00:21:02.016 [2024-11-20T11:35:07.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.017 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:02.017 Verification LBA range: start 0x0 length 0x2000 00:21:02.017 nvme0n1 : 1.02 5595.94 21.86 0.00 0.00 22664.22 5804.62 24591.60 00:21:02.017 [2024-11-20T11:35:07.783Z] =================================================================================================================== 00:21:02.017 [2024-11-20T11:35:07.783Z] Total : 5595.94 21.86 0.00 0.00 22664.22 5804.62 24591.60 00:21:02.017 { 00:21:02.017 "results": [ 00:21:02.017 { 00:21:02.017 "job": "nvme0n1", 00:21:02.017 "core_mask": "0x2", 00:21:02.017 "workload": "verify", 00:21:02.017 "status": "finished", 00:21:02.017 "verify_range": { 00:21:02.017 "start": 0, 00:21:02.017 "length": 8192 00:21:02.017 }, 00:21:02.017 "queue_depth": 128, 00:21:02.017 "io_size": 4096, 00:21:02.017 "runtime": 1.018953, 00:21:02.017 "iops": 5595.9401464051825, 00:21:02.017 "mibps": 21.859141196895244, 00:21:02.017 "io_failed": 0, 00:21:02.017 "io_timeout": 0, 00:21:02.017 "avg_latency_us": 22664.22136660487, 00:21:02.017 "min_latency_us": 5804.617142857142, 00:21:02.017 "max_latency_us": 24591.60380952381 00:21:02.017 } 00:21:02.017 ], 00:21:02.017 "core_count": 1 00:21:02.017 } 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 205930 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 205930 ']' 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 205930 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.017 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205930 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205930' 00:21:02.277 killing process with pid 205930 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 205930 00:21:02.277 Received shutdown signal, test time was about 1.000000 seconds 00:21:02.277 00:21:02.277 Latency(us) 00:21:02.277 [2024-11-20T11:35:08.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.277 [2024-11-20T11:35:08.043Z] =================================================================================================================== 00:21:02.277 [2024-11-20T11:35:08.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 205930 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 205551 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 205551 ']' 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 205551 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.277 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 205551 00:21:02.277 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.277 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.277 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 205551' 00:21:02.277 killing process with pid 205551 00:21:02.277 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 205551 00:21:02.277 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 205551 00:21:02.536 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:02.536 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.536 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.536 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=206183 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 206183 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 206183 ']' 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.537 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.537 [2024-11-20 12:35:08.216615] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:02.537 [2024-11-20 12:35:08.216667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.537 [2024-11-20 12:35:08.296158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.796 [2024-11-20 12:35:08.335758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.796 [2024-11-20 12:35:08.335787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.796 [2024-11-20 12:35:08.335794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.796 [2024-11-20 12:35:08.335800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.796 [2024-11-20 12:35:08.335805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.796 [2024-11-20 12:35:08.336384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.367 [2024-11-20 12:35:09.062574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.367 malloc0 00:21:03.367 [2024-11-20 12:35:09.090569] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.367 [2024-11-20 12:35:09.090762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=206428 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 206428 /var/tmp/bdevperf.sock 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 206428 ']' 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.367 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.626 [2024-11-20 12:35:09.166346] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:21:03.626 [2024-11-20 12:35:09.166391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206428 ] 00:21:03.626 [2024-11-20 12:35:09.240061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.626 [2024-11-20 12:35:09.281460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.626 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.626 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.626 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lPlypwmqUq 00:21:03.884 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:04.142 [2024-11-20 12:35:09.711826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:04.142 nvme0n1 00:21:04.142 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.142 Running I/O for 1 seconds... 
00:21:05.520 4947.00 IOPS, 19.32 MiB/s 00:21:05.520 Latency(us) 00:21:05.520 [2024-11-20T11:35:11.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.520 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:05.520 Verification LBA range: start 0x0 length 0x2000 00:21:05.520 nvme0n1 : 1.02 4992.08 19.50 0.00 0.00 25459.49 6616.02 34453.21 00:21:05.520 [2024-11-20T11:35:11.286Z] =================================================================================================================== 00:21:05.520 [2024-11-20T11:35:11.286Z] Total : 4992.08 19.50 0.00 0.00 25459.49 6616.02 34453.21 00:21:05.520 { 00:21:05.520 "results": [ 00:21:05.520 { 00:21:05.520 "job": "nvme0n1", 00:21:05.520 "core_mask": "0x2", 00:21:05.520 "workload": "verify", 00:21:05.520 "status": "finished", 00:21:05.520 "verify_range": { 00:21:05.520 "start": 0, 00:21:05.520 "length": 8192 00:21:05.520 }, 00:21:05.520 "queue_depth": 128, 00:21:05.520 "io_size": 4096, 00:21:05.520 "runtime": 1.016611, 00:21:05.520 "iops": 4992.076615342545, 00:21:05.520 "mibps": 19.500299278681815, 00:21:05.520 "io_failed": 0, 00:21:05.520 "io_timeout": 0, 00:21:05.520 "avg_latency_us": 25459.48923255923, 00:21:05.520 "min_latency_us": 6616.015238095238, 00:21:05.520 "max_latency_us": 34453.21142857143 00:21:05.520 } 00:21:05.520 ], 00:21:05.520 "core_count": 1 00:21:05.520 } 00:21:05.520 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:05.520 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.520 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.520 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.520 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:05.520 "subsystems": [ 00:21:05.520 { 00:21:05.520 "subsystem": 
"keyring", 00:21:05.520 "config": [ 00:21:05.520 { 00:21:05.520 "method": "keyring_file_add_key", 00:21:05.520 "params": { 00:21:05.520 "name": "key0", 00:21:05.520 "path": "/tmp/tmp.lPlypwmqUq" 00:21:05.520 } 00:21:05.520 } 00:21:05.520 ] 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "subsystem": "iobuf", 00:21:05.520 "config": [ 00:21:05.520 { 00:21:05.520 "method": "iobuf_set_options", 00:21:05.520 "params": { 00:21:05.520 "small_pool_count": 8192, 00:21:05.520 "large_pool_count": 1024, 00:21:05.520 "small_bufsize": 8192, 00:21:05.520 "large_bufsize": 135168, 00:21:05.520 "enable_numa": false 00:21:05.520 } 00:21:05.520 } 00:21:05.520 ] 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "subsystem": "sock", 00:21:05.520 "config": [ 00:21:05.520 { 00:21:05.520 "method": "sock_set_default_impl", 00:21:05.520 "params": { 00:21:05.520 "impl_name": "posix" 00:21:05.520 } 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "method": "sock_impl_set_options", 00:21:05.520 "params": { 00:21:05.520 "impl_name": "ssl", 00:21:05.520 "recv_buf_size": 4096, 00:21:05.520 "send_buf_size": 4096, 00:21:05.520 "enable_recv_pipe": true, 00:21:05.520 "enable_quickack": false, 00:21:05.520 "enable_placement_id": 0, 00:21:05.520 "enable_zerocopy_send_server": true, 00:21:05.520 "enable_zerocopy_send_client": false, 00:21:05.520 "zerocopy_threshold": 0, 00:21:05.520 "tls_version": 0, 00:21:05.520 "enable_ktls": false 00:21:05.520 } 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "method": "sock_impl_set_options", 00:21:05.520 "params": { 00:21:05.520 "impl_name": "posix", 00:21:05.520 "recv_buf_size": 2097152, 00:21:05.520 "send_buf_size": 2097152, 00:21:05.520 "enable_recv_pipe": true, 00:21:05.520 "enable_quickack": false, 00:21:05.520 "enable_placement_id": 0, 00:21:05.520 "enable_zerocopy_send_server": true, 00:21:05.520 "enable_zerocopy_send_client": false, 00:21:05.520 "zerocopy_threshold": 0, 00:21:05.520 "tls_version": 0, 00:21:05.520 "enable_ktls": false 00:21:05.520 } 00:21:05.520 } 00:21:05.520 
] 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "subsystem": "vmd", 00:21:05.520 "config": [] 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "subsystem": "accel", 00:21:05.520 "config": [ 00:21:05.520 { 00:21:05.520 "method": "accel_set_options", 00:21:05.520 "params": { 00:21:05.520 "small_cache_size": 128, 00:21:05.520 "large_cache_size": 16, 00:21:05.520 "task_count": 2048, 00:21:05.520 "sequence_count": 2048, 00:21:05.520 "buf_count": 2048 00:21:05.520 } 00:21:05.520 } 00:21:05.520 ] 00:21:05.520 }, 00:21:05.520 { 00:21:05.520 "subsystem": "bdev", 00:21:05.520 "config": [ 00:21:05.520 { 00:21:05.520 "method": "bdev_set_options", 00:21:05.520 "params": { 00:21:05.520 "bdev_io_pool_size": 65535, 00:21:05.520 "bdev_io_cache_size": 256, 00:21:05.521 "bdev_auto_examine": true, 00:21:05.521 "iobuf_small_cache_size": 128, 00:21:05.521 "iobuf_large_cache_size": 16 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_raid_set_options", 00:21:05.521 "params": { 00:21:05.521 "process_window_size_kb": 1024, 00:21:05.521 "process_max_bandwidth_mb_sec": 0 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_iscsi_set_options", 00:21:05.521 "params": { 00:21:05.521 "timeout_sec": 30 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_nvme_set_options", 00:21:05.521 "params": { 00:21:05.521 "action_on_timeout": "none", 00:21:05.521 "timeout_us": 0, 00:21:05.521 "timeout_admin_us": 0, 00:21:05.521 "keep_alive_timeout_ms": 10000, 00:21:05.521 "arbitration_burst": 0, 00:21:05.521 "low_priority_weight": 0, 00:21:05.521 "medium_priority_weight": 0, 00:21:05.521 "high_priority_weight": 0, 00:21:05.521 "nvme_adminq_poll_period_us": 10000, 00:21:05.521 "nvme_ioq_poll_period_us": 0, 00:21:05.521 "io_queue_requests": 0, 00:21:05.521 "delay_cmd_submit": true, 00:21:05.521 "transport_retry_count": 4, 00:21:05.521 "bdev_retry_count": 3, 00:21:05.521 "transport_ack_timeout": 0, 00:21:05.521 "ctrlr_loss_timeout_sec": 0, 
00:21:05.521 "reconnect_delay_sec": 0, 00:21:05.521 "fast_io_fail_timeout_sec": 0, 00:21:05.521 "disable_auto_failback": false, 00:21:05.521 "generate_uuids": false, 00:21:05.521 "transport_tos": 0, 00:21:05.521 "nvme_error_stat": false, 00:21:05.521 "rdma_srq_size": 0, 00:21:05.521 "io_path_stat": false, 00:21:05.521 "allow_accel_sequence": false, 00:21:05.521 "rdma_max_cq_size": 0, 00:21:05.521 "rdma_cm_event_timeout_ms": 0, 00:21:05.521 "dhchap_digests": [ 00:21:05.521 "sha256", 00:21:05.521 "sha384", 00:21:05.521 "sha512" 00:21:05.521 ], 00:21:05.521 "dhchap_dhgroups": [ 00:21:05.521 "null", 00:21:05.521 "ffdhe2048", 00:21:05.521 "ffdhe3072", 00:21:05.521 "ffdhe4096", 00:21:05.521 "ffdhe6144", 00:21:05.521 "ffdhe8192" 00:21:05.521 ] 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_nvme_set_hotplug", 00:21:05.521 "params": { 00:21:05.521 "period_us": 100000, 00:21:05.521 "enable": false 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_malloc_create", 00:21:05.521 "params": { 00:21:05.521 "name": "malloc0", 00:21:05.521 "num_blocks": 8192, 00:21:05.521 "block_size": 4096, 00:21:05.521 "physical_block_size": 4096, 00:21:05.521 "uuid": "38be2fa8-fc71-45c8-bf32-c68f322f8976", 00:21:05.521 "optimal_io_boundary": 0, 00:21:05.521 "md_size": 0, 00:21:05.521 "dif_type": 0, 00:21:05.521 "dif_is_head_of_md": false, 00:21:05.521 "dif_pi_format": 0 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "bdev_wait_for_examine" 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "subsystem": "nbd", 00:21:05.521 "config": [] 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "subsystem": "scheduler", 00:21:05.521 "config": [ 00:21:05.521 { 00:21:05.521 "method": "framework_set_scheduler", 00:21:05.521 "params": { 00:21:05.521 "name": "static" 00:21:05.521 } 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "subsystem": "nvmf", 00:21:05.521 "config": [ 00:21:05.521 { 
00:21:05.521 "method": "nvmf_set_config", 00:21:05.521 "params": { 00:21:05.521 "discovery_filter": "match_any", 00:21:05.521 "admin_cmd_passthru": { 00:21:05.521 "identify_ctrlr": false 00:21:05.521 }, 00:21:05.521 "dhchap_digests": [ 00:21:05.521 "sha256", 00:21:05.521 "sha384", 00:21:05.521 "sha512" 00:21:05.521 ], 00:21:05.521 "dhchap_dhgroups": [ 00:21:05.521 "null", 00:21:05.521 "ffdhe2048", 00:21:05.521 "ffdhe3072", 00:21:05.521 "ffdhe4096", 00:21:05.521 "ffdhe6144", 00:21:05.521 "ffdhe8192" 00:21:05.521 ] 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_set_max_subsystems", 00:21:05.521 "params": { 00:21:05.521 "max_subsystems": 1024 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_set_crdt", 00:21:05.521 "params": { 00:21:05.521 "crdt1": 0, 00:21:05.521 "crdt2": 0, 00:21:05.521 "crdt3": 0 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_create_transport", 00:21:05.521 "params": { 00:21:05.521 "trtype": "TCP", 00:21:05.521 "max_queue_depth": 128, 00:21:05.521 "max_io_qpairs_per_ctrlr": 127, 00:21:05.521 "in_capsule_data_size": 4096, 00:21:05.521 "max_io_size": 131072, 00:21:05.521 "io_unit_size": 131072, 00:21:05.521 "max_aq_depth": 128, 00:21:05.521 "num_shared_buffers": 511, 00:21:05.521 "buf_cache_size": 4294967295, 00:21:05.521 "dif_insert_or_strip": false, 00:21:05.521 "zcopy": false, 00:21:05.521 "c2h_success": false, 00:21:05.521 "sock_priority": 0, 00:21:05.521 "abort_timeout_sec": 1, 00:21:05.521 "ack_timeout": 0, 00:21:05.521 "data_wr_pool_size": 0 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_create_subsystem", 00:21:05.521 "params": { 00:21:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.521 "allow_any_host": false, 00:21:05.521 "serial_number": "00000000000000000000", 00:21:05.521 "model_number": "SPDK bdev Controller", 00:21:05.521 "max_namespaces": 32, 00:21:05.521 "min_cntlid": 1, 00:21:05.521 "max_cntlid": 65519, 00:21:05.521 
"ana_reporting": false 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_subsystem_add_host", 00:21:05.521 "params": { 00:21:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.521 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.521 "psk": "key0" 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_subsystem_add_ns", 00:21:05.521 "params": { 00:21:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.521 "namespace": { 00:21:05.521 "nsid": 1, 00:21:05.521 "bdev_name": "malloc0", 00:21:05.521 "nguid": "38BE2FA8FC7145C8BF32C68F322F8976", 00:21:05.521 "uuid": "38be2fa8-fc71-45c8-bf32-c68f322f8976", 00:21:05.521 "no_auto_visible": false 00:21:05.521 } 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "nvmf_subsystem_add_listener", 00:21:05.521 "params": { 00:21:05.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.521 "listen_address": { 00:21:05.521 "trtype": "TCP", 00:21:05.521 "adrfam": "IPv4", 00:21:05.521 "traddr": "10.0.0.2", 00:21:05.521 "trsvcid": "4420" 00:21:05.521 }, 00:21:05.521 "secure_channel": false, 00:21:05.521 "sock_impl": "ssl" 00:21:05.521 } 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 }' 00:21:05.521 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:05.521 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:05.521 "subsystems": [ 00:21:05.521 { 00:21:05.521 "subsystem": "keyring", 00:21:05.521 "config": [ 00:21:05.521 { 00:21:05.521 "method": "keyring_file_add_key", 00:21:05.521 "params": { 00:21:05.521 "name": "key0", 00:21:05.521 "path": "/tmp/tmp.lPlypwmqUq" 00:21:05.521 } 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "subsystem": "iobuf", 00:21:05.521 "config": [ 00:21:05.521 { 00:21:05.521 "method": "iobuf_set_options", 00:21:05.521 "params": { 00:21:05.521 
"small_pool_count": 8192, 00:21:05.521 "large_pool_count": 1024, 00:21:05.521 "small_bufsize": 8192, 00:21:05.521 "large_bufsize": 135168, 00:21:05.521 "enable_numa": false 00:21:05.521 } 00:21:05.521 } 00:21:05.521 ] 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "subsystem": "sock", 00:21:05.521 "config": [ 00:21:05.521 { 00:21:05.521 "method": "sock_set_default_impl", 00:21:05.521 "params": { 00:21:05.521 "impl_name": "posix" 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "sock_impl_set_options", 00:21:05.521 "params": { 00:21:05.521 "impl_name": "ssl", 00:21:05.521 "recv_buf_size": 4096, 00:21:05.521 "send_buf_size": 4096, 00:21:05.521 "enable_recv_pipe": true, 00:21:05.521 "enable_quickack": false, 00:21:05.521 "enable_placement_id": 0, 00:21:05.521 "enable_zerocopy_send_server": true, 00:21:05.521 "enable_zerocopy_send_client": false, 00:21:05.521 "zerocopy_threshold": 0, 00:21:05.521 "tls_version": 0, 00:21:05.521 "enable_ktls": false 00:21:05.521 } 00:21:05.521 }, 00:21:05.521 { 00:21:05.521 "method": "sock_impl_set_options", 00:21:05.521 "params": { 00:21:05.521 "impl_name": "posix", 00:21:05.521 "recv_buf_size": 2097152, 00:21:05.521 "send_buf_size": 2097152, 00:21:05.522 "enable_recv_pipe": true, 00:21:05.522 "enable_quickack": false, 00:21:05.522 "enable_placement_id": 0, 00:21:05.522 "enable_zerocopy_send_server": true, 00:21:05.522 "enable_zerocopy_send_client": false, 00:21:05.522 "zerocopy_threshold": 0, 00:21:05.522 "tls_version": 0, 00:21:05.522 "enable_ktls": false 00:21:05.522 } 00:21:05.522 } 00:21:05.522 ] 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "subsystem": "vmd", 00:21:05.522 "config": [] 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "subsystem": "accel", 00:21:05.522 "config": [ 00:21:05.522 { 00:21:05.522 "method": "accel_set_options", 00:21:05.522 "params": { 00:21:05.522 "small_cache_size": 128, 00:21:05.522 "large_cache_size": 16, 00:21:05.522 "task_count": 2048, 00:21:05.522 "sequence_count": 2048, 00:21:05.522 
"buf_count": 2048 00:21:05.522 } 00:21:05.522 } 00:21:05.522 ] 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "subsystem": "bdev", 00:21:05.522 "config": [ 00:21:05.522 { 00:21:05.522 "method": "bdev_set_options", 00:21:05.522 "params": { 00:21:05.522 "bdev_io_pool_size": 65535, 00:21:05.522 "bdev_io_cache_size": 256, 00:21:05.522 "bdev_auto_examine": true, 00:21:05.522 "iobuf_small_cache_size": 128, 00:21:05.522 "iobuf_large_cache_size": 16 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_raid_set_options", 00:21:05.522 "params": { 00:21:05.522 "process_window_size_kb": 1024, 00:21:05.522 "process_max_bandwidth_mb_sec": 0 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_iscsi_set_options", 00:21:05.522 "params": { 00:21:05.522 "timeout_sec": 30 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_nvme_set_options", 00:21:05.522 "params": { 00:21:05.522 "action_on_timeout": "none", 00:21:05.522 "timeout_us": 0, 00:21:05.522 "timeout_admin_us": 0, 00:21:05.522 "keep_alive_timeout_ms": 10000, 00:21:05.522 "arbitration_burst": 0, 00:21:05.522 "low_priority_weight": 0, 00:21:05.522 "medium_priority_weight": 0, 00:21:05.522 "high_priority_weight": 0, 00:21:05.522 "nvme_adminq_poll_period_us": 10000, 00:21:05.522 "nvme_ioq_poll_period_us": 0, 00:21:05.522 "io_queue_requests": 512, 00:21:05.522 "delay_cmd_submit": true, 00:21:05.522 "transport_retry_count": 4, 00:21:05.522 "bdev_retry_count": 3, 00:21:05.522 "transport_ack_timeout": 0, 00:21:05.522 "ctrlr_loss_timeout_sec": 0, 00:21:05.522 "reconnect_delay_sec": 0, 00:21:05.522 "fast_io_fail_timeout_sec": 0, 00:21:05.522 "disable_auto_failback": false, 00:21:05.522 "generate_uuids": false, 00:21:05.522 "transport_tos": 0, 00:21:05.522 "nvme_error_stat": false, 00:21:05.522 "rdma_srq_size": 0, 00:21:05.522 "io_path_stat": false, 00:21:05.522 "allow_accel_sequence": false, 00:21:05.522 "rdma_max_cq_size": 0, 00:21:05.522 "rdma_cm_event_timeout_ms": 0, 
00:21:05.522 "dhchap_digests": [ 00:21:05.522 "sha256", 00:21:05.522 "sha384", 00:21:05.522 "sha512" 00:21:05.522 ], 00:21:05.522 "dhchap_dhgroups": [ 00:21:05.522 "null", 00:21:05.522 "ffdhe2048", 00:21:05.522 "ffdhe3072", 00:21:05.522 "ffdhe4096", 00:21:05.522 "ffdhe6144", 00:21:05.522 "ffdhe8192" 00:21:05.522 ] 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_nvme_attach_controller", 00:21:05.522 "params": { 00:21:05.522 "name": "nvme0", 00:21:05.522 "trtype": "TCP", 00:21:05.522 "adrfam": "IPv4", 00:21:05.522 "traddr": "10.0.0.2", 00:21:05.522 "trsvcid": "4420", 00:21:05.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.522 "prchk_reftag": false, 00:21:05.522 "prchk_guard": false, 00:21:05.522 "ctrlr_loss_timeout_sec": 0, 00:21:05.522 "reconnect_delay_sec": 0, 00:21:05.522 "fast_io_fail_timeout_sec": 0, 00:21:05.522 "psk": "key0", 00:21:05.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.522 "hdgst": false, 00:21:05.522 "ddgst": false, 00:21:05.522 "multipath": "multipath" 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_nvme_set_hotplug", 00:21:05.522 "params": { 00:21:05.522 "period_us": 100000, 00:21:05.522 "enable": false 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_enable_histogram", 00:21:05.522 "params": { 00:21:05.522 "name": "nvme0n1", 00:21:05.522 "enable": true 00:21:05.522 } 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "method": "bdev_wait_for_examine" 00:21:05.522 } 00:21:05.522 ] 00:21:05.522 }, 00:21:05.522 { 00:21:05.522 "subsystem": "nbd", 00:21:05.522 "config": [] 00:21:05.522 } 00:21:05.522 ] 00:21:05.522 }' 00:21:05.522 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 206428 00:21:05.522 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 206428 ']' 00:21:05.522 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 206428 00:21:05.522 12:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206428 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206428' 00:21:05.782 killing process with pid 206428 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 206428 00:21:05.782 Received shutdown signal, test time was about 1.000000 seconds 00:21:05.782 00:21:05.782 Latency(us) 00:21:05.782 [2024-11-20T11:35:11.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.782 [2024-11-20T11:35:11.548Z] =================================================================================================================== 00:21:05.782 [2024-11-20T11:35:11.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 206428 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 206183 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 206183 ']' 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 206183 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.782 12:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206183 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206183' 00:21:05.782 killing process with pid 206183 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 206183 00:21:05.782 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 206183 00:21:06.042 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:06.042 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.042 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.042 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:06.042 "subsystems": [ 00:21:06.042 { 00:21:06.042 "subsystem": "keyring", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "keyring_file_add_key", 00:21:06.042 "params": { 00:21:06.042 "name": "key0", 00:21:06.042 "path": "/tmp/tmp.lPlypwmqUq" 00:21:06.042 } 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "iobuf", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "iobuf_set_options", 00:21:06.042 "params": { 00:21:06.042 "small_pool_count": 8192, 00:21:06.042 "large_pool_count": 1024, 00:21:06.042 "small_bufsize": 8192, 00:21:06.042 "large_bufsize": 135168, 00:21:06.042 "enable_numa": false 00:21:06.042 } 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "sock", 00:21:06.042 "config": [ 00:21:06.042 { 
00:21:06.042 "method": "sock_set_default_impl", 00:21:06.042 "params": { 00:21:06.042 "impl_name": "posix" 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "sock_impl_set_options", 00:21:06.042 "params": { 00:21:06.042 "impl_name": "ssl", 00:21:06.042 "recv_buf_size": 4096, 00:21:06.042 "send_buf_size": 4096, 00:21:06.042 "enable_recv_pipe": true, 00:21:06.042 "enable_quickack": false, 00:21:06.042 "enable_placement_id": 0, 00:21:06.042 "enable_zerocopy_send_server": true, 00:21:06.042 "enable_zerocopy_send_client": false, 00:21:06.042 "zerocopy_threshold": 0, 00:21:06.042 "tls_version": 0, 00:21:06.042 "enable_ktls": false 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "sock_impl_set_options", 00:21:06.042 "params": { 00:21:06.042 "impl_name": "posix", 00:21:06.042 "recv_buf_size": 2097152, 00:21:06.042 "send_buf_size": 2097152, 00:21:06.042 "enable_recv_pipe": true, 00:21:06.042 "enable_quickack": false, 00:21:06.042 "enable_placement_id": 0, 00:21:06.042 "enable_zerocopy_send_server": true, 00:21:06.042 "enable_zerocopy_send_client": false, 00:21:06.042 "zerocopy_threshold": 0, 00:21:06.042 "tls_version": 0, 00:21:06.042 "enable_ktls": false 00:21:06.042 } 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "vmd", 00:21:06.042 "config": [] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "accel", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "accel_set_options", 00:21:06.042 "params": { 00:21:06.042 "small_cache_size": 128, 00:21:06.042 "large_cache_size": 16, 00:21:06.042 "task_count": 2048, 00:21:06.042 "sequence_count": 2048, 00:21:06.042 "buf_count": 2048 00:21:06.042 } 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "bdev", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "bdev_set_options", 00:21:06.042 "params": { 00:21:06.042 "bdev_io_pool_size": 65535, 00:21:06.042 "bdev_io_cache_size": 256, 
00:21:06.042 "bdev_auto_examine": true, 00:21:06.042 "iobuf_small_cache_size": 128, 00:21:06.042 "iobuf_large_cache_size": 16 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_raid_set_options", 00:21:06.042 "params": { 00:21:06.042 "process_window_size_kb": 1024, 00:21:06.042 "process_max_bandwidth_mb_sec": 0 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_iscsi_set_options", 00:21:06.042 "params": { 00:21:06.042 "timeout_sec": 30 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_nvme_set_options", 00:21:06.042 "params": { 00:21:06.042 "action_on_timeout": "none", 00:21:06.042 "timeout_us": 0, 00:21:06.042 "timeout_admin_us": 0, 00:21:06.042 "keep_alive_timeout_ms": 10000, 00:21:06.042 "arbitration_burst": 0, 00:21:06.042 "low_priority_weight": 0, 00:21:06.042 "medium_priority_weight": 0, 00:21:06.042 "high_priority_weight": 0, 00:21:06.042 "nvme_adminq_poll_period_us": 10000, 00:21:06.042 "nvme_ioq_poll_period_us": 0, 00:21:06.042 "io_queue_requests": 0, 00:21:06.042 "delay_cmd_submit": true, 00:21:06.042 "transport_retry_count": 4, 00:21:06.042 "bdev_retry_count": 3, 00:21:06.042 "transport_ack_timeout": 0, 00:21:06.042 "ctrlr_loss_timeout_sec": 0, 00:21:06.042 "reconnect_delay_sec": 0, 00:21:06.042 "fast_io_fail_timeout_sec": 0, 00:21:06.042 "disable_auto_failback": false, 00:21:06.042 "generate_uuids": false, 00:21:06.042 "transport_tos": 0, 00:21:06.042 "nvme_error_stat": false, 00:21:06.042 "rdma_srq_size": 0, 00:21:06.042 "io_path_stat": false, 00:21:06.042 "allow_accel_sequence": false, 00:21:06.042 "rdma_max_cq_size": 0, 00:21:06.042 "rdma_cm_event_timeout_ms": 0, 00:21:06.042 "dhchap_digests": [ 00:21:06.042 "sha256", 00:21:06.042 "sha384", 00:21:06.042 "sha512" 00:21:06.042 ], 00:21:06.042 "dhchap_dhgroups": [ 00:21:06.042 "null", 00:21:06.042 "ffdhe2048", 00:21:06.042 "ffdhe3072", 00:21:06.042 "ffdhe4096", 00:21:06.042 "ffdhe6144", 00:21:06.042 "ffdhe8192" 00:21:06.042 ] 
00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_nvme_set_hotplug", 00:21:06.042 "params": { 00:21:06.042 "period_us": 100000, 00:21:06.042 "enable": false 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_malloc_create", 00:21:06.042 "params": { 00:21:06.042 "name": "malloc0", 00:21:06.042 "num_blocks": 8192, 00:21:06.042 "block_size": 4096, 00:21:06.042 "physical_block_size": 4096, 00:21:06.042 "uuid": "38be2fa8-fc71-45c8-bf32-c68f322f8976", 00:21:06.042 "optimal_io_boundary": 0, 00:21:06.042 "md_size": 0, 00:21:06.042 "dif_type": 0, 00:21:06.042 "dif_is_head_of_md": false, 00:21:06.042 "dif_pi_format": 0 00:21:06.042 } 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "method": "bdev_wait_for_examine" 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "nbd", 00:21:06.042 "config": [] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "scheduler", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "framework_set_scheduler", 00:21:06.042 "params": { 00:21:06.042 "name": "static" 00:21:06.042 } 00:21:06.042 } 00:21:06.042 ] 00:21:06.042 }, 00:21:06.042 { 00:21:06.042 "subsystem": "nvmf", 00:21:06.042 "config": [ 00:21:06.042 { 00:21:06.042 "method": "nvmf_set_config", 00:21:06.042 "params": { 00:21:06.042 "discovery_filter": "match_any", 00:21:06.042 "admin_cmd_passthru": { 00:21:06.042 "identify_ctrlr": false 00:21:06.042 }, 00:21:06.042 "dhchap_digests": [ 00:21:06.042 "sha256", 00:21:06.042 "sha384", 00:21:06.042 "sha512" 00:21:06.042 ], 00:21:06.042 "dhchap_dhgroups": [ 00:21:06.042 "null", 00:21:06.042 "ffdhe2048", 00:21:06.042 "ffdhe3072", 00:21:06.042 "ffdhe4096", 00:21:06.042 "ffdhe6144", 00:21:06.042 "ffdhe8192" 00:21:06.042 ] 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": "nvmf_set_max_subsystems", 00:21:06.043 "params": { 00:21:06.043 "max_subsystems": 1024 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": 
"nvmf_set_crdt", 00:21:06.043 "params": { 00:21:06.043 "crdt1": 0, 00:21:06.043 "crdt2": 0, 00:21:06.043 "crdt3": 0 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": "nvmf_create_transport", 00:21:06.043 "params": { 00:21:06.043 "trtype": "TCP", 00:21:06.043 "max_queue_depth": 128, 00:21:06.043 "max_io_qpairs_per_ctrlr": 127, 00:21:06.043 "in_capsule_data_size": 4096, 00:21:06.043 "max_io_size": 131072, 00:21:06.043 "io_unit_size": 131072, 00:21:06.043 "max_aq_depth": 128, 00:21:06.043 "num_shared_buffers": 511, 00:21:06.043 "buf_cache_size": 4294967295, 00:21:06.043 "dif_insert_or_strip": false, 00:21:06.043 "zcopy": false, 00:21:06.043 "c2h_success": false, 00:21:06.043 "sock_priority": 0, 00:21:06.043 "abort_timeout_sec": 1, 00:21:06.043 "ack_timeout": 0, 00:21:06.043 "data_wr_pool_size": 0 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": "nvmf_create_subsystem", 00:21:06.043 "params": { 00:21:06.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.043 "allow_any_host": false, 00:21:06.043 "serial_number": "00000000000000000000", 00:21:06.043 "model_number": "SPDK bdev Controller", 00:21:06.043 "max_namespaces": 32, 00:21:06.043 "min_cntlid": 1, 00:21:06.043 "max_cntlid": 65519, 00:21:06.043 "ana_reporting": false 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": "nvmf_subsystem_add_host", 00:21:06.043 "params": { 00:21:06.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.043 "host": "nqn.2016-06.io.spdk:host1", 00:21:06.043 "psk": "key0" 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 00:21:06.043 "method": "nvmf_subsystem_add_ns", 00:21:06.043 "params": { 00:21:06.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.043 "namespace": { 00:21:06.043 "nsid": 1, 00:21:06.043 "bdev_name": "malloc0", 00:21:06.043 "nguid": "38BE2FA8FC7145C8BF32C68F322F8976", 00:21:06.043 "uuid": "38be2fa8-fc71-45c8-bf32-c68f322f8976", 00:21:06.043 "no_auto_visible": false 00:21:06.043 } 00:21:06.043 } 00:21:06.043 }, 00:21:06.043 { 
00:21:06.043 "method": "nvmf_subsystem_add_listener", 00:21:06.043 "params": { 00:21:06.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.043 "listen_address": { 00:21:06.043 "trtype": "TCP", 00:21:06.043 "adrfam": "IPv4", 00:21:06.043 "traddr": "10.0.0.2", 00:21:06.043 "trsvcid": "4420" 00:21:06.043 }, 00:21:06.043 "secure_channel": false, 00:21:06.043 "sock_impl": "ssl" 00:21:06.043 } 00:21:06.043 } 00:21:06.043 ] 00:21:06.043 } 00:21:06.043 ] 00:21:06.043 }' 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=206896 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 206896 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 206896 ']' 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.043 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.043 [2024-11-20 12:35:11.754068] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:21:06.043 [2024-11-20 12:35:11.754109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.303 [2024-11-20 12:35:11.833314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.303 [2024-11-20 12:35:11.873804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.303 [2024-11-20 12:35:11.873838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.303 [2024-11-20 12:35:11.873846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.303 [2024-11-20 12:35:11.873852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.303 [2024-11-20 12:35:11.873856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.303 [2024-11-20 12:35:11.874414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.563 [2024-11-20 12:35:12.086230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.563 [2024-11-20 12:35:12.118274] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.563 [2024-11-20 12:35:12.118484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=207046 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 207046 /var/tmp/bdevperf.sock 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 207046 ']' 00:21:07.131 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:07.132 "subsystems": [ 00:21:07.132 { 00:21:07.132 "subsystem": "keyring", 00:21:07.132 "config": [ 00:21:07.132 { 00:21:07.132 "method": "keyring_file_add_key", 00:21:07.132 "params": { 00:21:07.132 "name": "key0", 00:21:07.132 "path": "/tmp/tmp.lPlypwmqUq" 00:21:07.132 } 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "iobuf", 00:21:07.132 "config": [ 00:21:07.132 { 00:21:07.132 "method": "iobuf_set_options", 00:21:07.132 "params": { 00:21:07.132 "small_pool_count": 8192, 00:21:07.132 "large_pool_count": 1024, 00:21:07.132 "small_bufsize": 8192, 00:21:07.132 "large_bufsize": 135168, 00:21:07.132 "enable_numa": false 00:21:07.132 } 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "sock", 00:21:07.132 "config": [ 00:21:07.132 { 00:21:07.132 "method": "sock_set_default_impl", 00:21:07.132 "params": { 00:21:07.132 "impl_name": "posix" 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "sock_impl_set_options", 00:21:07.132 "params": { 00:21:07.132 "impl_name": "ssl", 00:21:07.132 "recv_buf_size": 4096, 00:21:07.132 "send_buf_size": 4096, 00:21:07.132 "enable_recv_pipe": true, 00:21:07.132 "enable_quickack": false, 00:21:07.132 "enable_placement_id": 0, 00:21:07.132 "enable_zerocopy_send_server": true, 00:21:07.132 "enable_zerocopy_send_client": false, 00:21:07.132 "zerocopy_threshold": 0, 00:21:07.132 "tls_version": 0, 00:21:07.132 "enable_ktls": false 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "sock_impl_set_options", 00:21:07.132 "params": { 
00:21:07.132 "impl_name": "posix", 00:21:07.132 "recv_buf_size": 2097152, 00:21:07.132 "send_buf_size": 2097152, 00:21:07.132 "enable_recv_pipe": true, 00:21:07.132 "enable_quickack": false, 00:21:07.132 "enable_placement_id": 0, 00:21:07.132 "enable_zerocopy_send_server": true, 00:21:07.132 "enable_zerocopy_send_client": false, 00:21:07.132 "zerocopy_threshold": 0, 00:21:07.132 "tls_version": 0, 00:21:07.132 "enable_ktls": false 00:21:07.132 } 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "vmd", 00:21:07.132 "config": [] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "accel", 00:21:07.132 "config": [ 00:21:07.132 { 00:21:07.132 "method": "accel_set_options", 00:21:07.132 "params": { 00:21:07.132 "small_cache_size": 128, 00:21:07.132 "large_cache_size": 16, 00:21:07.132 "task_count": 2048, 00:21:07.132 "sequence_count": 2048, 00:21:07.132 "buf_count": 2048 00:21:07.132 } 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "bdev", 00:21:07.132 "config": [ 00:21:07.132 { 00:21:07.132 "method": "bdev_set_options", 00:21:07.132 "params": { 00:21:07.132 "bdev_io_pool_size": 65535, 00:21:07.132 "bdev_io_cache_size": 256, 00:21:07.132 "bdev_auto_examine": true, 00:21:07.132 "iobuf_small_cache_size": 128, 00:21:07.132 "iobuf_large_cache_size": 16 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_raid_set_options", 00:21:07.132 "params": { 00:21:07.132 "process_window_size_kb": 1024, 00:21:07.132 "process_max_bandwidth_mb_sec": 0 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_iscsi_set_options", 00:21:07.132 "params": { 00:21:07.132 "timeout_sec": 30 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_nvme_set_options", 00:21:07.132 "params": { 00:21:07.132 "action_on_timeout": "none", 00:21:07.132 "timeout_us": 0, 00:21:07.132 "timeout_admin_us": 0, 00:21:07.132 "keep_alive_timeout_ms": 10000, 00:21:07.132 
"arbitration_burst": 0, 00:21:07.132 "low_priority_weight": 0, 00:21:07.132 "medium_priority_weight": 0, 00:21:07.132 "high_priority_weight": 0, 00:21:07.132 "nvme_adminq_poll_period_us": 10000, 00:21:07.132 "nvme_ioq_poll_period_us": 0, 00:21:07.132 "io_queue_requests": 512, 00:21:07.132 "delay_cmd_submit": true, 00:21:07.132 "transport_retry_count": 4, 00:21:07.132 "bdev_retry_count": 3, 00:21:07.132 "transport_ack_timeout": 0, 00:21:07.132 "ctrlr_loss_timeout_sec": 0, 00:21:07.132 "reconnect_delay_sec": 0, 00:21:07.132 "fast_io_fail_timeout_sec": 0, 00:21:07.132 "disable_auto_failback": false, 00:21:07.132 "generate_uuids": false, 00:21:07.132 "transport_tos": 0, 00:21:07.132 "nvme_error_stat": false, 00:21:07.132 "rdma_srq_size": 0, 00:21:07.132 "io_path_stat": false, 00:21:07.132 "allow_accel_sequence": false, 00:21:07.132 "rdma_max_cq_size": 0, 00:21:07.132 "rdma_cm_event_timeout_ms": 0, 00:21:07.132 "dhchap_digests": [ 00:21:07.132 "sha256", 00:21:07.132 "sha384", 00:21:07.132 "sha512" 00:21:07.132 ], 00:21:07.132 "dhchap_dhgroups": [ 00:21:07.132 "null", 00:21:07.132 "ffdhe2048", 00:21:07.132 "ffdhe3072", 00:21:07.132 "ffdhe4096", 00:21:07.132 "ffdhe6144", 00:21:07.132 "ffdhe8192" 00:21:07.132 ] 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_nvme_attach_controller", 00:21:07.132 "params": { 00:21:07.132 "name": "nvme0", 00:21:07.132 "trtype": "TCP", 00:21:07.132 "adrfam": "IPv4", 00:21:07.132 "traddr": "10.0.0.2", 00:21:07.132 "trsvcid": "4420", 00:21:07.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.132 "prchk_reftag": false, 00:21:07.132 "prchk_guard": false, 00:21:07.132 "ctrlr_loss_timeout_sec": 0, 00:21:07.132 "reconnect_delay_sec": 0, 00:21:07.132 "fast_io_fail_timeout_sec": 0, 00:21:07.132 "psk": "key0", 00:21:07.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.132 "hdgst": false, 00:21:07.132 "ddgst": false, 00:21:07.132 "multipath": "multipath" 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 
"method": "bdev_nvme_set_hotplug", 00:21:07.132 "params": { 00:21:07.132 "period_us": 100000, 00:21:07.132 "enable": false 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_enable_histogram", 00:21:07.132 "params": { 00:21:07.132 "name": "nvme0n1", 00:21:07.132 "enable": true 00:21:07.132 } 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "method": "bdev_wait_for_examine" 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }, 00:21:07.132 { 00:21:07.132 "subsystem": "nbd", 00:21:07.132 "config": [] 00:21:07.132 } 00:21:07.132 ] 00:21:07.132 }' 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.132 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.132 [2024-11-20 12:35:12.670806] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:07.132 [2024-11-20 12:35:12.670851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207046 ] 00:21:07.132 [2024-11-20 12:35:12.746398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.132 [2024-11-20 12:35:12.787783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.391 [2024-11-20 12:35:12.939338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.959 12:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.959 12:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.959 12:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:07.959 12:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:07.959 12:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.959 12:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.218 Running I/O for 1 seconds... 00:21:09.154 5240.00 IOPS, 20.47 MiB/s 00:21:09.154 Latency(us) 00:21:09.154 [2024-11-20T11:35:14.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.154 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:09.154 Verification LBA range: start 0x0 length 0x2000 00:21:09.154 nvme0n1 : 1.03 5231.12 20.43 0.00 0.00 24205.39 4993.22 42692.02 00:21:09.154 [2024-11-20T11:35:14.920Z] =================================================================================================================== 00:21:09.154 [2024-11-20T11:35:14.920Z] Total : 5231.12 20.43 0.00 0.00 24205.39 4993.22 42692.02 00:21:09.154 { 00:21:09.154 "results": [ 00:21:09.154 { 00:21:09.154 "job": "nvme0n1", 00:21:09.154 "core_mask": "0x2", 00:21:09.154 "workload": "verify", 00:21:09.154 "status": "finished", 00:21:09.154 "verify_range": { 00:21:09.154 "start": 0, 00:21:09.154 "length": 8192 00:21:09.154 }, 00:21:09.154 "queue_depth": 128, 00:21:09.154 "io_size": 4096, 00:21:09.154 "runtime": 1.026357, 00:21:09.154 "iops": 5231.1232836137915, 00:21:09.154 "mibps": 20.434075326616373, 00:21:09.154 "io_failed": 0, 00:21:09.154 "io_timeout": 0, 00:21:09.154 "avg_latency_us": 24205.388453290052, 00:21:09.154 "min_latency_us": 4993.219047619048, 00:21:09.154 "max_latency_us": 42692.02285714286 00:21:09.154 } 00:21:09.154 ], 00:21:09.155 "core_count": 1 00:21:09.155 } 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:09.155 12:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:09.155 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.155 nvmf_trace.0 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 207046 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 207046 ']' 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 207046 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 207046 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207046' 00:21:09.414 killing process with pid 207046 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 207046 00:21:09.414 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.414 00:21:09.414 Latency(us) 00:21:09.414 [2024-11-20T11:35:15.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.414 [2024-11-20T11:35:15.180Z] =================================================================================================================== 00:21:09.414 [2024-11-20T11:35:15.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.414 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 207046 00:21:09.414 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:09.414 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.415 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:09.415 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.415 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:09.415 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.415 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.415 rmmod nvme_tcp 00:21:09.415 rmmod nvme_fabrics 00:21:09.673 rmmod nvme_keyring 00:21:09.673 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:09.673 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:09.673 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:09.673 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 206896 ']' 00:21:09.673 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 206896 ']' 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206896' 00:21:09.674 killing process with pid 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 206896 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.674 12:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.luUc38Pkc4 /tmp/tmp.mQVxy1z8NQ /tmp/tmp.lPlypwmqUq 00:21:12.211 00:21:12.211 real 1m19.801s 00:21:12.211 user 2m1.136s 00:21:12.211 sys 0m31.203s 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.211 ************************************ 00:21:12.211 END TEST nvmf_tls 00:21:12.211 ************************************ 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:12.211 ************************************ 00:21:12.211 START TEST nvmf_fips 00:21:12.211 ************************************ 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:12.211 * Looking for test storage... 00:21:12.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.211 
12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:12.211 12:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:12.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.211 --rc genhtml_branch_coverage=1 00:21:12.211 --rc genhtml_function_coverage=1 00:21:12.211 --rc genhtml_legend=1 00:21:12.211 --rc geninfo_all_blocks=1 00:21:12.211 --rc geninfo_unexecuted_blocks=1 00:21:12.211 00:21:12.211 ' 00:21:12.211 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:12.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.212 --rc genhtml_branch_coverage=1 00:21:12.212 --rc genhtml_function_coverage=1 00:21:12.212 --rc genhtml_legend=1 00:21:12.212 --rc geninfo_all_blocks=1 00:21:12.212 --rc geninfo_unexecuted_blocks=1 00:21:12.212 00:21:12.212 ' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.212 --rc genhtml_branch_coverage=1 00:21:12.212 --rc genhtml_function_coverage=1 00:21:12.212 --rc genhtml_legend=1 00:21:12.212 --rc geninfo_all_blocks=1 00:21:12.212 --rc geninfo_unexecuted_blocks=1 00:21:12.212 00:21:12.212 ' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:12.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.212 --rc genhtml_branch_coverage=1 00:21:12.212 --rc genhtml_function_coverage=1 00:21:12.212 --rc genhtml_legend=1 00:21:12.212 --rc geninfo_all_blocks=1 00:21:12.212 --rc geninfo_unexecuted_blocks=1 00:21:12.212 00:21:12.212 ' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.212 12:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.212 12:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:12.212 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:12.213 Error setting digest 00:21:12.213 40C2B9FDE07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:12.213 40C2B9FDE07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.213 12:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:12.213 12:35:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.818 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.818 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.819 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.819 12:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:21:18.819 00:21:18.819 --- 10.0.0.2 ping statistics --- 00:21:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.819 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:21:18.819 00:21:18.819 --- 10.0.0.1 ping statistics --- 00:21:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.819 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.819 12:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=211003 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 211003 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 211003 ']' 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.819 12:35:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:18.819 [2024-11-20 12:35:23.994924] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:21:18.819 [2024-11-20 12:35:23.994973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.819 [2024-11-20 12:35:24.074005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.819 [2024-11-20 12:35:24.115035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.819 [2024-11-20 12:35:24.115072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.819 [2024-11-20 12:35:24.115079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.819 [2024-11-20 12:35:24.115085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.819 [2024-11-20 12:35:24.115090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.819 [2024-11-20 12:35:24.115678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.078 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.078 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:19.078 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.078 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.078 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.DsR 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.DsR 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.DsR 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.DsR 00:21:19.337 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:19.337 [2024-11-20 12:35:25.023712] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.337 [2024-11-20 12:35:25.039722] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.337 [2024-11-20 12:35:25.039871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.337 malloc0 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=211198 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 211198 /var/tmp/bdevperf.sock 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 211198 ']' 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.596 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 [2024-11-20 12:35:25.170303] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:21:19.596 [2024-11-20 12:35:25.170354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211198 ] 00:21:19.596 [2024-11-20 12:35:25.246086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.596 [2024-11-20 12:35:25.286296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.534 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.534 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:20.534 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.DsR 00:21:20.534 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:20.793 [2024-11-20 12:35:26.341677] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.793 TLSTESTn1 00:21:20.793 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.793 Running I/O for 10 seconds... 
00:21:23.109 5376.00 IOPS, 21.00 MiB/s [2024-11-20T11:35:29.812Z] 5355.50 IOPS, 20.92 MiB/s [2024-11-20T11:35:30.746Z] 5433.00 IOPS, 21.22 MiB/s [2024-11-20T11:35:31.683Z] 5448.00 IOPS, 21.28 MiB/s [2024-11-20T11:35:32.619Z] 5476.40 IOPS, 21.39 MiB/s [2024-11-20T11:35:33.556Z] 5500.00 IOPS, 21.48 MiB/s [2024-11-20T11:35:34.933Z] 5512.57 IOPS, 21.53 MiB/s [2024-11-20T11:35:35.872Z] 5518.00 IOPS, 21.55 MiB/s [2024-11-20T11:35:36.809Z] 5525.22 IOPS, 21.58 MiB/s [2024-11-20T11:35:36.809Z] 5533.50 IOPS, 21.62 MiB/s 00:21:31.043 Latency(us) 00:21:31.043 [2024-11-20T11:35:36.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.043 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.043 Verification LBA range: start 0x0 length 0x2000 00:21:31.043 TLSTESTn1 : 10.02 5536.68 21.63 0.00 0.00 23082.73 6522.39 43690.67 00:21:31.043 [2024-11-20T11:35:36.809Z] =================================================================================================================== 00:21:31.043 [2024-11-20T11:35:36.809Z] Total : 5536.68 21.63 0.00 0.00 23082.73 6522.39 43690.67 00:21:31.043 { 00:21:31.043 "results": [ 00:21:31.043 { 00:21:31.043 "job": "TLSTESTn1", 00:21:31.043 "core_mask": "0x4", 00:21:31.043 "workload": "verify", 00:21:31.043 "status": "finished", 00:21:31.043 "verify_range": { 00:21:31.043 "start": 0, 00:21:31.043 "length": 8192 00:21:31.043 }, 00:21:31.043 "queue_depth": 128, 00:21:31.043 "io_size": 4096, 00:21:31.043 "runtime": 10.016837, 00:21:31.043 "iops": 5536.677895427469, 00:21:31.043 "mibps": 21.62764802901355, 00:21:31.043 "io_failed": 0, 00:21:31.043 "io_timeout": 0, 00:21:31.043 "avg_latency_us": 23082.732591073793, 00:21:31.043 "min_latency_us": 6522.392380952381, 00:21:31.043 "max_latency_us": 43690.666666666664 00:21:31.043 } 00:21:31.043 ], 00:21:31.043 "core_count": 1 00:21:31.043 } 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:31.043 
12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:31.043 nvmf_trace.0 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:31.043 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 211198 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 211198 ']' 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 211198 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211198 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211198' 00:21:31.044 killing process with pid 211198 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 211198 00:21:31.044 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.044 00:21:31.044 Latency(us) 00:21:31.044 [2024-11-20T11:35:36.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.044 [2024-11-20T11:35:36.810Z] =================================================================================================================== 00:21:31.044 [2024-11-20T11:35:36.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.044 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 211198 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.303 rmmod nvme_tcp 00:21:31.303 rmmod nvme_fabrics 00:21:31.303 rmmod nvme_keyring 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.303 12:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 211003 ']' 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 211003 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 211003 ']' 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 211003 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211003 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211003' 00:21:31.303 killing process with pid 211003 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 211003 00:21:31.303 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 211003 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.563 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.469 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.469 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.DsR 00:21:33.469 00:21:33.469 real 0m21.657s 00:21:33.469 user 0m23.301s 00:21:33.469 sys 0m9.711s 00:21:33.469 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.469 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.469 ************************************ 00:21:33.469 END TEST nvmf_fips 00:21:33.469 ************************************ 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.728 ************************************ 00:21:33.728 START TEST nvmf_control_msg_list 00:21:33.728 ************************************ 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:33.728 * Looking for test storage... 00:21:33.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.728 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.728 --rc genhtml_branch_coverage=1 00:21:33.728 --rc genhtml_function_coverage=1 00:21:33.728 --rc genhtml_legend=1 00:21:33.728 --rc geninfo_all_blocks=1 00:21:33.728 --rc geninfo_unexecuted_blocks=1 00:21:33.728 00:21:33.728 ' 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.729 --rc genhtml_branch_coverage=1 00:21:33.729 --rc genhtml_function_coverage=1 00:21:33.729 --rc genhtml_legend=1 00:21:33.729 --rc geninfo_all_blocks=1 00:21:33.729 --rc geninfo_unexecuted_blocks=1 00:21:33.729 00:21:33.729 ' 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.729 --rc genhtml_branch_coverage=1 00:21:33.729 --rc genhtml_function_coverage=1 00:21:33.729 --rc genhtml_legend=1 00:21:33.729 --rc geninfo_all_blocks=1 00:21:33.729 --rc geninfo_unexecuted_blocks=1 00:21:33.729 00:21:33.729 ' 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.729 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.729 --rc genhtml_branch_coverage=1 00:21:33.729 --rc genhtml_function_coverage=1 00:21:33.729 --rc genhtml_legend=1 00:21:33.729 --rc geninfo_all_blocks=1 00:21:33.729 --rc geninfo_unexecuted_blocks=1 00:21:33.729 00:21:33.729 ' 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.729 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.988 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.988 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.988 12:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:33.988 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.988 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.988 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.989 12:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.989 12:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.989 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.558 12:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.558 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.559 12:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.559 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.559 12:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.559 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.559 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.560 12:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:21:40.560 00:21:40.560 --- 10.0.0.2 ping statistics --- 00:21:40.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.560 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:40.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:21:40.560 00:21:40.560 --- 10.0.0.1 ping statistics --- 00:21:40.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.560 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=216735 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 216735 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 216735 ']' 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.560 [2024-11-20 12:35:45.526657] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:40.560 [2024-11-20 12:35:45.526709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.560 [2024-11-20 12:35:45.609683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.560 [2024-11-20 12:35:45.649867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.560 [2024-11-20 12:35:45.649901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.560 [2024-11-20 12:35:45.649907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.560 [2024-11-20 12:35:45.649913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.560 [2024-11-20 12:35:45.649918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.560 [2024-11-20 12:35:45.650494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.560 [2024-11-20 12:35:45.792779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.560 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.561 Malloc0 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.561 [2024-11-20 12:35:45.833103] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=216811 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=216812 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=216813 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.561 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 216811 00:21:40.561 [2024-11-20 12:35:45.901490] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:40.561 [2024-11-20 12:35:45.911580] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:40.561 [2024-11-20 12:35:45.911731] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:41.498 Initializing NVMe Controllers 00:21:41.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:41.498 Initialization complete. Launching workers. 00:21:41.498 ======================================================== 00:21:41.498 Latency(us) 00:21:41.498 Device Information : IOPS MiB/s Average min max 00:21:41.498 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6569.98 25.66 151.87 125.86 455.82 00:21:41.498 ======================================================== 00:21:41.498 Total : 6569.98 25.66 151.87 125.86 455.82 00:21:41.498 00:21:41.498 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 216812 00:21:41.498 Initializing NVMe Controllers 00:21:41.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:41.498 Initialization complete. Launching workers. 
00:21:41.498 ======================================================== 00:21:41.498 Latency(us) 00:21:41.498 Device Information : IOPS MiB/s Average min max 00:21:41.498 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40969.13 40616.08 41902.52 00:21:41.498 ======================================================== 00:21:41.498 Total : 25.00 0.10 40969.13 40616.08 41902.52 00:21:41.498 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 216813 00:21:41.498 Initializing NVMe Controllers 00:21:41.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:41.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:41.498 Initialization complete. Launching workers. 00:21:41.498 ======================================================== 00:21:41.498 Latency(us) 00:21:41.498 Device Information : IOPS MiB/s Average min max 00:21:41.498 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6471.95 25.28 154.16 141.21 403.01 00:21:41.498 ======================================================== 00:21:41.498 Total : 6471.95 25.28 154.16 141.21 403.01 00:21:41.498 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:41.498 12:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.498 rmmod nvme_tcp 00:21:41.498 rmmod nvme_fabrics 00:21:41.498 rmmod nvme_keyring 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 216735 ']' 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 216735 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 216735 ']' 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 216735 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216735 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216735' 00:21:41.498 killing process with pid 216735 00:21:41.498 12:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 216735 00:21:41.498 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 216735 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.757 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.758 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.758 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.758 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.758 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.661 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.661 00:21:43.661 real 0m10.104s 00:21:43.661 user 0m6.426s 00:21:43.661 sys 0m5.550s 00:21:43.661 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.661 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:43.661 ************************************ 00:21:43.661 END TEST nvmf_control_msg_list 00:21:43.661 ************************************ 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:43.920 ************************************ 00:21:43.920 START TEST nvmf_wait_for_buf 00:21:43.920 ************************************ 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:43.920 * Looking for test storage... 
00:21:43.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:43.920 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.921 --rc genhtml_branch_coverage=1 00:21:43.921 --rc genhtml_function_coverage=1 00:21:43.921 --rc genhtml_legend=1 00:21:43.921 --rc geninfo_all_blocks=1 00:21:43.921 --rc geninfo_unexecuted_blocks=1 00:21:43.921 00:21:43.921 ' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.921 --rc genhtml_branch_coverage=1 00:21:43.921 --rc genhtml_function_coverage=1 00:21:43.921 --rc genhtml_legend=1 00:21:43.921 --rc geninfo_all_blocks=1 00:21:43.921 --rc geninfo_unexecuted_blocks=1 00:21:43.921 00:21:43.921 ' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.921 --rc genhtml_branch_coverage=1 00:21:43.921 --rc genhtml_function_coverage=1 00:21:43.921 --rc genhtml_legend=1 00:21:43.921 --rc geninfo_all_blocks=1 00:21:43.921 --rc geninfo_unexecuted_blocks=1 00:21:43.921 00:21:43.921 ' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.921 --rc genhtml_branch_coverage=1 00:21:43.921 --rc genhtml_function_coverage=1 00:21:43.921 --rc genhtml_legend=1 00:21:43.921 --rc geninfo_all_blocks=1 00:21:43.921 --rc geninfo_unexecuted_blocks=1 00:21:43.921 00:21:43.921 ' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.921 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.180 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.181 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.181 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:44.181 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.181 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.754 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.754 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.755 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.755 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.755 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.755 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:21:50.755 00:21:50.755 --- 10.0.0.2 ping statistics --- 00:21:50.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.755 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:50.755 00:21:50.755 --- 10.0.0.1 ping statistics --- 00:21:50.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.755 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=220561 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 220561 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 220561 ']' 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 [2024-11-20 12:35:55.706404] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:21:50.755 [2024-11-20 12:35:55.706449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.755 [2024-11-20 12:35:55.786582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.755 [2024-11-20 12:35:55.827146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.755 [2024-11-20 12:35:55.827181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:50.755 [2024-11-20 12:35:55.827189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.755 [2024-11-20 12:35:55.827198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.755 [2024-11-20 12:35:55.827207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.755 [2024-11-20 12:35:55.827753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 
12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 Malloc0 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.755 [2024-11-20 12:35:56.000304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.755 [2024-11-20 12:35:56.028480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:50.755 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:50.755 [2024-11-20 12:35:56.118275] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:51.687 Initializing NVMe Controllers 00:21:51.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:51.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:51.687 Initialization complete. Launching workers. 00:21:51.687 ======================================================== 00:21:51.687 Latency(us) 00:21:51.687 Device Information : IOPS MiB/s Average min max 00:21:51.687 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.54 16.07 32208.22 5282.74 63851.37 00:21:51.687 ======================================================== 00:21:51.687 Total : 128.54 16.07 32208.22 5282.74 63851.37 00:21:51.687 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.945 12:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.945 rmmod nvme_tcp 00:21:51.945 rmmod nvme_fabrics 00:21:51.945 rmmod nvme_keyring 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 220561 ']' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 220561 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 220561 ']' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 220561 
00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220561 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220561' 00:21:51.945 killing process with pid 220561 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 220561 00:21:51.945 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 220561 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.204 12:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.204 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.109 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.109 00:21:54.109 real 0m10.356s 00:21:54.109 user 0m3.866s 00:21:54.109 sys 0m4.928s 00:21:54.109 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.109 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:54.109 ************************************ 00:21:54.109 END TEST nvmf_wait_for_buf 00:21:54.109 ************************************ 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.367 12:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.938 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.939 
12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.939 12:36:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.939 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.939 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.939 12:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.939 ************************************ 00:22:00.939 START TEST nvmf_perf_adq 00:22:00.939 ************************************ 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.940 * Looking for test storage... 00:22:00.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.940 --rc genhtml_branch_coverage=1 00:22:00.940 --rc genhtml_function_coverage=1 00:22:00.940 --rc genhtml_legend=1 00:22:00.940 --rc geninfo_all_blocks=1 00:22:00.940 --rc geninfo_unexecuted_blocks=1 00:22:00.940 00:22:00.940 ' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.940 --rc genhtml_branch_coverage=1 00:22:00.940 --rc genhtml_function_coverage=1 00:22:00.940 --rc genhtml_legend=1 00:22:00.940 --rc geninfo_all_blocks=1 00:22:00.940 --rc geninfo_unexecuted_blocks=1 00:22:00.940 00:22:00.940 ' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.940 --rc genhtml_branch_coverage=1 00:22:00.940 --rc genhtml_function_coverage=1 00:22:00.940 --rc genhtml_legend=1 00:22:00.940 --rc geninfo_all_blocks=1 00:22:00.940 --rc geninfo_unexecuted_blocks=1 00:22:00.940 00:22:00.940 ' 00:22:00.940 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.940 --rc genhtml_branch_coverage=1 00:22:00.940 --rc genhtml_function_coverage=1 00:22:00.940 --rc genhtml_legend=1 00:22:00.940 --rc geninfo_all_blocks=1 00:22:00.940 --rc geninfo_unexecuted_blocks=1 00:22:00.940 00:22:00.940 ' 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.941 12:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.941 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.215 12:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:06.215 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:22:06.216 Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:22:06.216 Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:22:06.216 Found net devices under 0000:86:00.0: cvl_0_0
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:22:06.216 Found net devices under 0000:86:00.1: cvl_0_1
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:22:06.216 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:22:06.783 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:22:09.320 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:22:14.607 Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:22:14.607 Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:22:14.607 Found net devices under 0000:86:00.0: cvl_0_0
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:14.607 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:22:14.608 Found net devices under 0000:86:00.1: cvl_0_1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:14.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:14.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms
00:22:14.608
00:22:14.608 --- 10.0.0.2 ping statistics ---
00:22:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:14.608 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:14.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:14.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:22:14.608
00:22:14.608 --- 10.0.0.1 ping statistics ---
00:22:14.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:14.608 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=229376
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 229376
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 229376 ']'
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:14.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:14.608 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 [2024-11-20 12:36:19.892145] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:22:14.608 [2024-11-20 12:36:19.892185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:14.608 [2024-11-20 12:36:19.970313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:14.608 [2024-11-20 12:36:20.013691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:14.608 [2024-11-20 12:36:20.013729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:14.608 [2024-11-20 12:36:20.013736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:14.608 [2024-11-20 12:36:20.013742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:14.608 [2024-11-20 12:36:20.013747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:14.608 [2024-11-20 12:36:20.015224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:14.608 [2024-11-20 12:36:20.015295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:14.608 [2024-11-20 12:36:20.015400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:14.608 [2024-11-20 12:36:20.015400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.608 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.608 [2024-11-20 12:36:20.208372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.609 Malloc1
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:14.609 [2024-11-20 12:36:20.270758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=229438
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:22:14.609 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:17.138 "tick_rate": 2100000000,
00:22:17.138 "poll_groups": [
00:22:17.138 {
00:22:17.138 "name": "nvmf_tgt_poll_group_000",
00:22:17.138 "admin_qpairs": 1,
00:22:17.138 "io_qpairs": 1,
00:22:17.138 "current_admin_qpairs": 1,
00:22:17.138 "current_io_qpairs": 1,
00:22:17.138 "pending_bdev_io": 0,
00:22:17.138 "completed_nvme_io": 19471,
00:22:17.138 "transports": [
00:22:17.138 {
00:22:17.138 "trtype": "TCP"
00:22:17.138 }
00:22:17.138 ]
00:22:17.138 },
00:22:17.138 {
00:22:17.138 "name": "nvmf_tgt_poll_group_001",
00:22:17.138 "admin_qpairs": 0,
00:22:17.138 "io_qpairs": 1,
00:22:17.138 "current_admin_qpairs": 0,
00:22:17.138 "current_io_qpairs": 1,
00:22:17.138 "pending_bdev_io": 0,
00:22:17.138 "completed_nvme_io": 19552,
00:22:17.138 "transports": [
00:22:17.138 {
00:22:17.138 "trtype": "TCP"
00:22:17.138 }
00:22:17.138 ]
00:22:17.138 },
00:22:17.138 {
00:22:17.138 "name": "nvmf_tgt_poll_group_002",
00:22:17.138 "admin_qpairs": 0,
00:22:17.138 "io_qpairs": 1,
00:22:17.138 "current_admin_qpairs": 0,
00:22:17.138 "current_io_qpairs": 1,
00:22:17.138 "pending_bdev_io": 0,
00:22:17.138 "completed_nvme_io": 19442,
00:22:17.138 "transports": [
00:22:17.138 {
00:22:17.138 "trtype": "TCP"
00:22:17.138 }
00:22:17.138 ]
00:22:17.138 },
00:22:17.138 {
00:22:17.138 "name": "nvmf_tgt_poll_group_003",
00:22:17.138 "admin_qpairs": 0,
00:22:17.138 "io_qpairs": 1,
00:22:17.138 "current_admin_qpairs": 0,
00:22:17.138 "current_io_qpairs": 1,
00:22:17.138 "pending_bdev_io": 0,
00:22:17.138 "completed_nvme_io": 19591,
00:22:17.138 "transports": [
00:22:17.138 {
00:22:17.138 "trtype": "TCP"
00:22:17.138 }
00:22:17.138 ]
00:22:17.138 }
00:22:17.138 ]
00:22:17.138 }'
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:17.138 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 229438
00:22:25.246 Initializing NVMe Controllers
00:22:25.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:25.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:25.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:25.247 Initialization complete. Launching workers.
00:22:25.247 ========================================================
00:22:25.247 Latency(us)
00:22:25.247 Device Information : IOPS MiB/s Average min max
00:22:25.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10438.87 40.78 6131.42 2046.59 10338.72
00:22:25.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10492.67 40.99 6100.96 2382.48 10654.68
00:22:25.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10411.77 40.67 6147.94 1636.85 10179.56
00:22:25.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10385.47 40.57 6162.69 2142.00 10815.39
00:22:25.247 ========================================================
00:22:25.247 Total : 41728.78 163.00 6135.66 1636.85 10815.39
00:22:25.247
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:25.247 rmmod nvme_tcp
00:22:25.247 rmmod nvme_fabrics
00:22:25.247 rmmod nvme_keyring
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 229376 ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 229376 ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 229376'
00:22:25.247 killing process with pid 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 229376
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:25.247 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:27.152 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:27.152 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:22:27.152 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:22:27.152 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:22:28.530 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:22:30.435 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.710 12:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:35.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.710 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:35.711 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:35.711 Found net devices under 0000:86:00.0: cvl_0_0 00:22:35.711 12:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:35.711 Found net devices under 0000:86:00.1: cvl_0_1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:22:35.711 00:22:35.711 --- 10.0.0.2 ping statistics --- 00:22:35.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.711 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:22:35.711 00:22:35.711 --- 10.0.0.1 ping statistics --- 00:22:35.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.711 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:35.711 net.core.busy_poll = 1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:35.711 net.core.busy_read = 1 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:35.711 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=233224 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 233224 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 233224 ']' 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.970 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.971 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.971 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.971 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.971 [2024-11-20 12:36:41.662876] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:22:35.971 [2024-11-20 12:36:41.662931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.229 [2024-11-20 12:36:41.744259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.229 [2024-11-20 12:36:41.784904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.229 [2024-11-20 12:36:41.784942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.229 [2024-11-20 12:36:41.784949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.229 [2024-11-20 12:36:41.784954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:36.229 [2024-11-20 12:36:41.784959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.229 [2024-11-20 12:36:41.786558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.229 [2024-11-20 12:36:41.786664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.229 [2024-11-20 12:36:41.786774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.229 [2024-11-20 12:36:41.786775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:36.797 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 [2024-11-20 12:36:42.676680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:37.056 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.056 12:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 Malloc1 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 [2024-11-20 12:36:42.735398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=233476 
00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:37.057 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:39.585 "tick_rate": 2100000000, 00:22:39.585 "poll_groups": [ 00:22:39.585 { 00:22:39.585 "name": "nvmf_tgt_poll_group_000", 00:22:39.585 "admin_qpairs": 1, 00:22:39.585 "io_qpairs": 2, 00:22:39.585 "current_admin_qpairs": 1, 00:22:39.585 "current_io_qpairs": 2, 00:22:39.585 "pending_bdev_io": 0, 00:22:39.585 "completed_nvme_io": 29278, 00:22:39.585 "transports": [ 00:22:39.585 { 00:22:39.585 "trtype": "TCP" 00:22:39.585 } 00:22:39.585 ] 00:22:39.585 }, 00:22:39.585 { 00:22:39.585 "name": "nvmf_tgt_poll_group_001", 00:22:39.585 "admin_qpairs": 0, 00:22:39.585 "io_qpairs": 2, 00:22:39.585 "current_admin_qpairs": 0, 00:22:39.585 "current_io_qpairs": 2, 00:22:39.585 "pending_bdev_io": 0, 00:22:39.585 "completed_nvme_io": 29115, 00:22:39.585 "transports": [ 00:22:39.585 { 00:22:39.585 "trtype": "TCP" 00:22:39.585 } 00:22:39.585 ] 00:22:39.585 }, 00:22:39.585 { 00:22:39.585 "name": "nvmf_tgt_poll_group_002", 00:22:39.585 "admin_qpairs": 0, 00:22:39.585 "io_qpairs": 0, 00:22:39.585 "current_admin_qpairs": 0, 
00:22:39.585 "current_io_qpairs": 0, 00:22:39.585 "pending_bdev_io": 0, 00:22:39.585 "completed_nvme_io": 0, 00:22:39.585 "transports": [ 00:22:39.585 { 00:22:39.585 "trtype": "TCP" 00:22:39.585 } 00:22:39.585 ] 00:22:39.585 }, 00:22:39.585 { 00:22:39.585 "name": "nvmf_tgt_poll_group_003", 00:22:39.585 "admin_qpairs": 0, 00:22:39.585 "io_qpairs": 0, 00:22:39.585 "current_admin_qpairs": 0, 00:22:39.585 "current_io_qpairs": 0, 00:22:39.585 "pending_bdev_io": 0, 00:22:39.585 "completed_nvme_io": 0, 00:22:39.585 "transports": [ 00:22:39.585 { 00:22:39.585 "trtype": "TCP" 00:22:39.585 } 00:22:39.585 ] 00:22:39.585 } 00:22:39.585 ] 00:22:39.585 }' 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:39.585 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 233476 00:22:47.891 Initializing NVMe Controllers 00:22:47.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:47.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:47.891 Initialization complete. Launching workers. 
00:22:47.891 ======================================================== 00:22:47.891 Latency(us) 00:22:47.891 Device Information : IOPS MiB/s Average min max 00:22:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7482.20 29.23 8554.08 1554.75 52122.41 00:22:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9002.20 35.16 7109.17 1342.02 53389.33 00:22:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6592.90 25.75 9707.95 1113.31 52469.13 00:22:47.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7275.90 28.42 8811.77 1395.43 53195.85 00:22:47.891 ======================================================== 00:22:47.891 Total : 30353.19 118.57 8437.94 1113.31 53389.33 00:22:47.891 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.891 rmmod nvme_tcp 00:22:47.891 rmmod nvme_fabrics 00:22:47.891 rmmod nvme_keyring 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:47.891 12:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 233224 ']' 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 233224 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 233224 ']' 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 233224 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233224 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.891 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233224' 00:22:47.891 killing process with pid 233224 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 233224 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 233224 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:47.891 12:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.891 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.182 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:51.183 00:22:51.183 real 0m50.698s 00:22:51.183 user 2m46.466s 00:22:51.183 sys 0m10.385s 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.183 ************************************ 00:22:51.183 END TEST nvmf_perf_adq 00:22:51.183 ************************************ 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.183 ************************************ 00:22:51.183 START TEST nvmf_shutdown 00:22:51.183 ************************************ 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:51.183 * Looking for test storage... 00:22:51.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.183 12:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:51.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.183 --rc genhtml_branch_coverage=1 00:22:51.183 --rc genhtml_function_coverage=1 00:22:51.183 --rc genhtml_legend=1 00:22:51.183 --rc geninfo_all_blocks=1 00:22:51.183 --rc geninfo_unexecuted_blocks=1 00:22:51.183 00:22:51.183 ' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:51.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.183 --rc genhtml_branch_coverage=1 00:22:51.183 --rc genhtml_function_coverage=1 00:22:51.183 --rc genhtml_legend=1 00:22:51.183 --rc geninfo_all_blocks=1 00:22:51.183 --rc geninfo_unexecuted_blocks=1 00:22:51.183 00:22:51.183 ' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:51.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.183 --rc genhtml_branch_coverage=1 00:22:51.183 --rc genhtml_function_coverage=1 00:22:51.183 --rc genhtml_legend=1 00:22:51.183 --rc geninfo_all_blocks=1 00:22:51.183 --rc geninfo_unexecuted_blocks=1 00:22:51.183 00:22:51.183 ' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:51.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.183 --rc genhtml_branch_coverage=1 00:22:51.183 --rc genhtml_function_coverage=1 00:22:51.183 --rc genhtml_legend=1 00:22:51.183 --rc geninfo_all_blocks=1 00:22:51.183 --rc geninfo_unexecuted_blocks=1 00:22:51.183 00:22:51.183 ' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.183 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.184 ************************************ 00:22:51.184 START TEST nvmf_shutdown_tc1 00:22:51.184 ************************************ 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.184 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:57.754 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.754 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.754 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.754 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.754 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.754 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.754 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:22:57.755 00:22:57.755 --- 10.0.0.2 ping statistics --- 00:22:57.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.755 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:57.755 00:22:57.755 --- 10.0.0.1 ping statistics --- 00:22:57.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.755 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=238900 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 238900 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 238900 ']' 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:57.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 [2024-11-20 12:37:02.622008] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:22:57.755 [2024-11-20 12:37:02.622058] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.755 [2024-11-20 12:37:02.702159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.755 [2024-11-20 12:37:02.742689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.755 [2024-11-20 12:37:02.742728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.755 [2024-11-20 12:37:02.742737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.755 [2024-11-20 12:37:02.742744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.755 [2024-11-20 12:37:02.742749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.755 [2024-11-20 12:37:02.744257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.755 [2024-11-20 12:37:02.744364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.755 [2024-11-20 12:37:02.744446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.755 [2024-11-20 12:37:02.744447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 [2024-11-20 12:37:02.888877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.755 12:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.755 Malloc1 00:22:57.755 [2024-11-20 12:37:03.006416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.755 Malloc2 00:22:57.755 Malloc3 00:22:57.755 Malloc4 00:22:57.755 Malloc5 00:22:57.755 Malloc6 00:22:57.755 Malloc7 00:22:57.755 Malloc8 00:22:57.755 Malloc9 
00:22:57.756 Malloc10 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=238997 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 238997 /var/tmp/bdevperf.sock 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 238997 ']' 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": 
${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 
00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 [2024-11-20 12:37:03.476712] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:22:57.756 [2024-11-20 12:37:03.476762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 
00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.756 "hdgst": ${hdgst:-false}, 00:22:57.756 "ddgst": ${ddgst:-false} 00:22:57.756 }, 00:22:57.756 "method": "bdev_nvme_attach_controller" 00:22:57.756 } 00:22:57.756 EOF 00:22:57.756 )") 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.756 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.756 { 00:22:57.756 "params": { 00:22:57.756 "name": "Nvme$subsystem", 00:22:57.756 "trtype": "$TEST_TRANSPORT", 00:22:57.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.756 "adrfam": "ipv4", 00:22:57.756 "trsvcid": "$NVMF_PORT", 00:22:57.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.757 "hdgst": ${hdgst:-false}, 00:22:57.757 "ddgst": ${ddgst:-false} 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 } 00:22:57.757 EOF 00:22:57.757 )") 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:57.757 { 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme$subsystem", 00:22:57.757 "trtype": "$TEST_TRANSPORT", 00:22:57.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "$NVMF_PORT", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:22:57.757 "hdgst": ${hdgst:-false}, 00:22:57.757 "ddgst": ${ddgst:-false} 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 } 00:22:57.757 EOF 00:22:57.757 )") 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:57.757 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme1", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme2", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme3", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 
"name": "Nvme4", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme5", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme6", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme7", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme8", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:57.757 
"hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme9", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 },{ 00:22:57.757 "params": { 00:22:57.757 "name": "Nvme10", 00:22:57.757 "trtype": "tcp", 00:22:57.757 "traddr": "10.0.0.2", 00:22:57.757 "adrfam": "ipv4", 00:22:57.757 "trsvcid": "4420", 00:22:57.757 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:57.757 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:57.757 "hdgst": false, 00:22:57.757 "ddgst": false 00:22:57.757 }, 00:22:57.757 "method": "bdev_nvme_attach_controller" 00:22:57.757 }' 00:22:58.016 [2024-11-20 12:37:03.554765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.016 [2024-11-20 12:37:03.595684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 238997 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:59.922 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:00.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 238997 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 238900 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.859 { 00:23:00.859 "params": { 00:23:00.859 "name": "Nvme$subsystem", 00:23:00.859 "trtype": "$TEST_TRANSPORT", 00:23:00.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.859 "adrfam": "ipv4", 00:23:00.859 "trsvcid": "$NVMF_PORT", 00:23:00.859 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.859 "hdgst": ${hdgst:-false}, 00:23:00.859 "ddgst": ${ddgst:-false} 00:23:00.859 }, 00:23:00.859 "method": "bdev_nvme_attach_controller" 00:23:00.859 } 00:23:00.859 EOF 00:23:00.859 )") 00:23:00.859 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": 
${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 
00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 [2024-11-20 12:37:06.415485] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:00.860 [2024-11-20 12:37:06.415536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid239486 ] 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.860 { 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme$subsystem", 00:23:00.860 "trtype": "$TEST_TRANSPORT", 00:23:00.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "$NVMF_PORT", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.860 "hdgst": ${hdgst:-false}, 00:23:00.860 "ddgst": ${ddgst:-false} 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 } 00:23:00.860 EOF 00:23:00.860 )") 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:00.860 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme1", 00:23:00.860 "trtype": "tcp", 00:23:00.860 "traddr": "10.0.0.2", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "4420", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.860 "hdgst": false, 00:23:00.860 "ddgst": false 00:23:00.860 }, 00:23:00.860 "method": "bdev_nvme_attach_controller" 00:23:00.860 },{ 00:23:00.860 "params": { 00:23:00.860 "name": "Nvme2", 00:23:00.860 "trtype": "tcp", 00:23:00.860 "traddr": "10.0.0.2", 00:23:00.860 "adrfam": "ipv4", 00:23:00.860 "trsvcid": "4420", 00:23:00.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme3", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme4", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 
00:23:00.861 "name": "Nvme5", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme6", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme7", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme8", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme9", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 },{ 00:23:00.861 "params": { 00:23:00.861 "name": "Nvme10", 00:23:00.861 "trtype": "tcp", 00:23:00.861 "traddr": "10.0.0.2", 00:23:00.861 "adrfam": "ipv4", 00:23:00.861 "trsvcid": "4420", 00:23:00.861 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.861 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.861 "hdgst": false, 00:23:00.861 "ddgst": false 00:23:00.861 }, 00:23:00.861 "method": "bdev_nvme_attach_controller" 00:23:00.861 }' 00:23:00.861 [2024-11-20 12:37:06.494971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.861 [2024-11-20 12:37:06.536250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.239 Running I/O for 1 seconds... 00:23:03.436 2248.00 IOPS, 140.50 MiB/s 00:23:03.436 Latency(us) 00:23:03.436 [2024-11-20T11:37:09.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.436 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme1n1 : 1.13 285.58 17.85 0.00 0.00 220095.01 9986.44 191739.61 00:23:03.436 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme2n1 : 1.08 238.07 14.88 0.00 0.00 262520.08 15915.89 224694.86 00:23:03.436 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme3n1 : 1.12 285.98 17.87 0.00 0.00 215153.52 18100.42 209715.20 00:23:03.436 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme4n1 : 1.14 280.04 17.50 0.00 0.00 217251.45 13606.52 214708.42 00:23:03.436 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme5n1 : 1.19 269.39 16.84 0.00 0.00 215612.37 15603.81 231685.36 00:23:03.436 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme6n1 : 1.14 284.83 17.80 0.00 0.00 205784.00 7052.92 209715.20 00:23:03.436 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme7n1 : 1.13 282.74 17.67 0.00 0.00 205784.41 17226.61 214708.42 00:23:03.436 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme8n1 : 1.14 281.54 17.60 0.00 0.00 203729.58 13731.35 215707.06 00:23:03.436 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme9n1 : 1.15 278.17 17.39 0.00 0.00 203311.98 16976.94 240673.16 00:23:03.436 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.436 Verification LBA range: start 0x0 length 0x400 00:23:03.436 Nvme10n1 : 1.19 268.72 16.79 0.00 0.00 200775.39 14792.41 221698.93 00:23:03.436 [2024-11-20T11:37:09.202Z] =================================================================================================================== 00:23:03.436 [2024-11-20T11:37:09.202Z] Total : 2755.05 172.19 0.00 0.00 214029.24 7052.92 240673.16 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.695 rmmod nvme_tcp 00:23:03.695 rmmod nvme_fabrics 00:23:03.695 rmmod nvme_keyring 00:23:03.695 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 238900 ']' 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 238900 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 238900 ']' 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 238900 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 238900 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 238900' 00:23:03.696 killing process with pid 238900 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 238900 00:23:03.696 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 238900 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:04.264 12:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.264 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.170 00:23:06.170 real 0m15.239s 00:23:06.170 user 0m33.845s 00:23:06.170 sys 0m5.843s 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:06.170 ************************************ 00:23:06.170 END TEST nvmf_shutdown_tc1 00:23:06.170 ************************************ 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.170 ************************************ 00:23:06.170 
START TEST nvmf_shutdown_tc2 00:23:06.170 ************************************ 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.170 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.170 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.170 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:06.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:06.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:06.170 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.170 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.171 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:06.171 Found net devices under 0000:86:00.0: cvl_0_0 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.171 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:06.432 Found net devices under 0000:86:00.1: cvl_0_1 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.432 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.432 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.432 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:06.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:23:06.433 00:23:06.433 --- 10.0.0.2 ping statistics --- 00:23:06.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.433 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:06.433 00:23:06.433 --- 10.0.0.1 ping statistics --- 00:23:06.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.433 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.433 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.692 12:37:12 
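The trace above is `nvmf_tcp_init` moving one port of the e810 pair (`cvl_0_0`) into a private network namespace as the target side, addressing both ends, opening port 4420 in iptables, and ping-verifying the link. A minimal dry-run sketch of that topology, emitting the commands instead of executing them (the real steps need root; interface names below mirror the trace but are placeholders):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology seen in the trace: the target
# interface is moved into its own namespace, the initiator interface stays in
# the root namespace, and the two ends share a /24. Commands are printed, not
# run, so this is safe without root.
emit_tcp_init() {
    local target_if=$1 initiator_if=$2
    local ns="${target_if}_ns_spdk"
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

emit_tcp_init cvl_0_0 cvl_0_1
```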
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=240539 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 240539 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 240539 ']' 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.692 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 [2024-11-20 12:37:12.289718] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:06.692 [2024-11-20 12:37:12.289764] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.692 [2024-11-20 12:37:12.368983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.692 [2024-11-20 12:37:12.409749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.692 [2024-11-20 12:37:12.409787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.692 [2024-11-20 12:37:12.409795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.692 [2024-11-20 12:37:12.409801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.692 [2024-11-20 12:37:12.409806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:06.692 [2024-11-20 12:37:12.411428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.692 [2024-11-20 12:37:12.411535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.692 [2024-11-20 12:37:12.411644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.692 [2024-11-20 12:37:12.411645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.950 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.951 [2024-11-20 12:37:12.555908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.951 12:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.951 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.951 Malloc1 00:23:06.951 [2024-11-20 12:37:12.669786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.951 Malloc2 00:23:07.209 Malloc3 00:23:07.209 Malloc4 00:23:07.209 Malloc5 00:23:07.209 Malloc6 00:23:07.209 Malloc7 00:23:07.209 Malloc8 00:23:07.467 Malloc9 
00:23:07.467 Malloc10 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=240790 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 240790 /var/tmp/bdevperf.sock 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 240790 ']' 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.467 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:07.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 
"adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": 
${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.468 "hdgst": ${hdgst:-false}, 00:23:07.468 "ddgst": ${ddgst:-false} 00:23:07.468 }, 00:23:07.468 "method": "bdev_nvme_attach_controller" 00:23:07.468 } 00:23:07.468 EOF 00:23:07.468 )") 00:23:07.468 [2024-11-20 12:37:13.138423] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:23:07.468 [2024-11-20 12:37:13.138471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240790 ] 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.468 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.468 { 00:23:07.468 "params": { 00:23:07.468 "name": "Nvme$subsystem", 00:23:07.468 "trtype": "$TEST_TRANSPORT", 00:23:07.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.468 "adrfam": "ipv4", 00:23:07.468 "trsvcid": "$NVMF_PORT", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.469 "hdgst": ${hdgst:-false}, 00:23:07.469 "ddgst": ${ddgst:-false} 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 } 00:23:07.469 EOF 00:23:07.469 )") 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.469 { 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme$subsystem", 00:23:07.469 "trtype": "$TEST_TRANSPORT", 00:23:07.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "$NVMF_PORT", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.469 "hdgst": 
${hdgst:-false}, 00:23:07.469 "ddgst": ${ddgst:-false} 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 } 00:23:07.469 EOF 00:23:07.469 )") 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.469 { 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme$subsystem", 00:23:07.469 "trtype": "$TEST_TRANSPORT", 00:23:07.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "$NVMF_PORT", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.469 "hdgst": ${hdgst:-false}, 00:23:07.469 "ddgst": ${ddgst:-false} 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 } 00:23:07.469 EOF 00:23:07.469 )") 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
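The repeated `config+=("$(cat <<-EOF ... EOF)")` entries above are `gen_nvmf_target_json` building one JSON fragment per subsystem into a bash array, which is then comma-joined (`IFS=,`) and fed through `jq` to produce the `--json` input for bdevperf. A simplified sketch of that accumulation pattern (field set trimmed for brevity; addresses are the test defaults from the trace, not discovered values):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: one heredoc-generated JSON
# fragment per subsystem, collected in an array and joined with commas.
gen_config() {
    local config=() subsystem
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{ "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
    done
    # Join the fragments with commas, as the trace does before piping to jq.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_config 1 2 3
```

The heredoc-into-array trick keeps each fragment readable in the script while still letting `$subsystem` expand per iteration; the `IFS=,` join is what turns ten fragments into the single `},{`-separated stream visible in the `printf '%s\n'` output above.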
00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:07.469 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme1", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme2", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme3", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme4", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 
00:23:07.469 "name": "Nvme5", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme6", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme7", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme8", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme9", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 },{ 00:23:07.469 "params": { 00:23:07.469 "name": "Nvme10", 00:23:07.469 "trtype": "tcp", 00:23:07.469 "traddr": "10.0.0.2", 00:23:07.469 "adrfam": "ipv4", 00:23:07.469 "trsvcid": "4420", 00:23:07.469 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:07.469 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:07.469 "hdgst": false, 00:23:07.469 "ddgst": false 00:23:07.469 }, 00:23:07.469 "method": "bdev_nvme_attach_controller" 00:23:07.469 }' 00:23:07.469 [2024-11-20 12:37:13.213543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.727 [2024-11-20 12:37:13.255073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.101 Running I/O for 10 seconds... 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:09.360 12:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.360 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:09.361 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:09.361 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:09.619 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:09.619 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:09.619 12:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.619 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.619 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.619 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 240790 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 240790 ']' 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 240790 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.878 12:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240790 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240790' 00:23:09.878 killing process with pid 240790 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 240790 00:23:09.878 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 240790 00:23:09.878 Received shutdown signal, test time was about 0.845600 seconds 00:23:09.878 00:23:09.878 Latency(us) 00:23:09.878 [2024-11-20T11:37:15.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme1n1 : 0.84 304.31 19.02 0.00 0.00 207421.68 14730.00 217704.35 00:23:09.878 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme2n1 : 0.83 232.44 14.53 0.00 0.00 266968.83 17975.59 227690.79 00:23:09.878 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme3n1 : 0.84 302.98 18.94 0.00 0.00 200241.55 6865.68 216705.71 00:23:09.878 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme4n1 : 0.83 313.15 19.57 0.00 0.00 189734.70 
2418.59 212711.13 00:23:09.878 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme5n1 : 0.82 233.42 14.59 0.00 0.00 250407.33 17975.59 244667.73 00:23:09.878 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme6n1 : 0.84 305.68 19.11 0.00 0.00 187665.07 27962.03 199728.76 00:23:09.878 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme7n1 : 0.84 311.30 19.46 0.00 0.00 179930.36 1911.47 205720.62 00:23:09.878 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme8n1 : 0.84 304.89 19.06 0.00 0.00 180519.50 18100.42 214708.42 00:23:09.878 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme9n1 : 0.81 236.46 14.78 0.00 0.00 226361.86 17725.93 220700.28 00:23:09.878 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.878 Verification LBA range: start 0x0 length 0x400 00:23:09.878 Nvme10n1 : 0.82 235.37 14.71 0.00 0.00 222262.45 15042.07 217704.35 00:23:09.878 [2024-11-20T11:37:15.644Z] =================================================================================================================== 00:23:09.878 [2024-11-20T11:37:15.644Z] Total : 2780.00 173.75 0.00 0.00 207699.85 1911.47 244667.73 00:23:10.138 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.076 rmmod nvme_tcp 00:23:11.076 rmmod nvme_fabrics 00:23:11.076 rmmod nvme_keyring 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 240539 ']' 
00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 240539 ']' 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240539' 00:23:11.076 killing process with pid 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 240539 00:23:11.076 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 240539 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.644 12:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.644 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:13.549 00:23:13.549 real 0m7.354s 00:23:13.549 user 0m21.554s 00:23:13.549 sys 0m1.331s 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.549 ************************************ 00:23:13.549 END TEST nvmf_shutdown_tc2 00:23:13.549 ************************************ 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:13.549 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.549 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:13.808 ************************************ 00:23:13.808 START TEST nvmf_shutdown_tc3 00:23:13.808 ************************************ 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.808 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.809 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.809 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.809 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.809 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.809 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.809 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.809 
12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.809 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.810 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.810 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:14.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:23:14.069 00:23:14.069 --- 10.0.0.2 ping statistics --- 00:23:14.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.069 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:14.069 00:23:14.069 --- 10.0.0.1 ping statistics --- 00:23:14.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.069 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=242022 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 242022 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:14.069 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 242022 ']' 00:23:14.070 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.070 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.070 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.070 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.070 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.070 [2024-11-20 12:37:19.732974] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:14.070 [2024-11-20 12:37:19.733031] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.070 [2024-11-20 12:37:19.814330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.329 [2024-11-20 12:37:19.856249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.329 [2024-11-20 12:37:19.856286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.329 [2024-11-20 12:37:19.856293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.329 [2024-11-20 12:37:19.856299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.329 [2024-11-20 12:37:19.856304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.329 [2024-11-20 12:37:19.857752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.329 [2024-11-20 12:37:19.857838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.329 [2024-11-20 12:37:19.857920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.329 [2024-11-20 12:37:19.857921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.896 [2024-11-20 12:37:20.604094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.896 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.896 12:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.897 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:15.156 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:15.156 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.156 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.156 Malloc1 00:23:15.156 [2024-11-20 12:37:20.714739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.156 Malloc2 00:23:15.156 Malloc3 00:23:15.156 Malloc4 00:23:15.156 Malloc5 00:23:15.156 Malloc6 00:23:15.415 Malloc7 00:23:15.415 Malloc8 00:23:15.415 Malloc9 
00:23:15.415 Malloc10 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=242320 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 242320 /var/tmp/bdevperf.sock 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 242320 ']' 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:15.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.415 { 00:23:15.415 "params": { 00:23:15.415 "name": "Nvme$subsystem", 00:23:15.415 "trtype": "$TEST_TRANSPORT", 00:23:15.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.415 "adrfam": "ipv4", 00:23:15.415 "trsvcid": "$NVMF_PORT", 00:23:15.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.415 "hdgst": ${hdgst:-false}, 00:23:15.415 "ddgst": ${ddgst:-false} 00:23:15.415 }, 00:23:15.415 "method": "bdev_nvme_attach_controller" 00:23:15.415 } 00:23:15.415 EOF 00:23:15.415 )") 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.415 { 00:23:15.415 "params": { 00:23:15.415 "name": "Nvme$subsystem", 00:23:15.415 "trtype": "$TEST_TRANSPORT", 00:23:15.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.415 
"adrfam": "ipv4", 00:23:15.415 "trsvcid": "$NVMF_PORT", 00:23:15.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.415 "hdgst": ${hdgst:-false}, 00:23:15.415 "ddgst": ${ddgst:-false} 00:23:15.415 }, 00:23:15.415 "method": "bdev_nvme_attach_controller" 00:23:15.415 } 00:23:15.415 EOF 00:23:15.415 )") 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.415 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.415 { 00:23:15.415 "params": { 00:23:15.415 "name": "Nvme$subsystem", 00:23:15.415 "trtype": "$TEST_TRANSPORT", 00:23:15.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.415 "adrfam": "ipv4", 00:23:15.415 "trsvcid": "$NVMF_PORT", 00:23:15.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.415 "hdgst": ${hdgst:-false}, 00:23:15.415 "ddgst": ${ddgst:-false} 00:23:15.415 }, 00:23:15.415 "method": "bdev_nvme_attach_controller" 00:23:15.415 } 00:23:15.416 EOF 00:23:15.416 )") 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.416 { 00:23:15.416 "params": { 00:23:15.416 "name": "Nvme$subsystem", 00:23:15.416 "trtype": "$TEST_TRANSPORT", 00:23:15.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.416 "adrfam": "ipv4", 00:23:15.416 "trsvcid": "$NVMF_PORT", 00:23:15.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:15.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.416 "hdgst": ${hdgst:-false}, 00:23:15.416 "ddgst": ${ddgst:-false} 00:23:15.416 }, 00:23:15.416 "method": "bdev_nvme_attach_controller" 00:23:15.416 } 00:23:15.416 EOF 00:23:15.416 )") 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.416 { 00:23:15.416 "params": { 00:23:15.416 "name": "Nvme$subsystem", 00:23:15.416 "trtype": "$TEST_TRANSPORT", 00:23:15.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.416 "adrfam": "ipv4", 00:23:15.416 "trsvcid": "$NVMF_PORT", 00:23:15.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.416 "hdgst": ${hdgst:-false}, 00:23:15.416 "ddgst": ${ddgst:-false} 00:23:15.416 }, 00:23:15.416 "method": "bdev_nvme_attach_controller" 00:23:15.416 } 00:23:15.416 EOF 00:23:15.416 )") 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.416 { 00:23:15.416 "params": { 00:23:15.416 "name": "Nvme$subsystem", 00:23:15.416 "trtype": "$TEST_TRANSPORT", 00:23:15.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.416 "adrfam": "ipv4", 00:23:15.416 "trsvcid": "$NVMF_PORT", 00:23:15.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.416 "hdgst": ${hdgst:-false}, 00:23:15.416 "ddgst": 
${ddgst:-false} 00:23:15.416 }, 00:23:15.416 "method": "bdev_nvme_attach_controller" 00:23:15.416 } 00:23:15.416 EOF 00:23:15.416 )") 00:23:15.416 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.675 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.675 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.675 { 00:23:15.675 "params": { 00:23:15.675 "name": "Nvme$subsystem", 00:23:15.675 "trtype": "$TEST_TRANSPORT", 00:23:15.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.675 "adrfam": "ipv4", 00:23:15.675 "trsvcid": "$NVMF_PORT", 00:23:15.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.675 "hdgst": ${hdgst:-false}, 00:23:15.675 "ddgst": ${ddgst:-false} 00:23:15.675 }, 00:23:15.675 "method": "bdev_nvme_attach_controller" 00:23:15.675 } 00:23:15.675 EOF 00:23:15.675 )") 00:23:15.675 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.675 [2024-11-20 12:37:21.183487] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:23:15.675 [2024-11-20 12:37:21.183540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242320 ] 00:23:15.675 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.675 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.675 { 00:23:15.675 "params": { 00:23:15.675 "name": "Nvme$subsystem", 00:23:15.675 "trtype": "$TEST_TRANSPORT", 00:23:15.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.675 "adrfam": "ipv4", 00:23:15.675 "trsvcid": "$NVMF_PORT", 00:23:15.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.676 "hdgst": ${hdgst:-false}, 00:23:15.676 "ddgst": ${ddgst:-false} 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 } 00:23:15.676 EOF 00:23:15.676 )") 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.676 { 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme$subsystem", 00:23:15.676 "trtype": "$TEST_TRANSPORT", 00:23:15.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "$NVMF_PORT", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.676 "hdgst": ${hdgst:-false}, 00:23:15.676 "ddgst": ${ddgst:-false} 00:23:15.676 }, 00:23:15.676 "method": 
"bdev_nvme_attach_controller" 00:23:15.676 } 00:23:15.676 EOF 00:23:15.676 )") 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.676 { 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme$subsystem", 00:23:15.676 "trtype": "$TEST_TRANSPORT", 00:23:15.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "$NVMF_PORT", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.676 "hdgst": ${hdgst:-false}, 00:23:15.676 "ddgst": ${ddgst:-false} 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 } 00:23:15.676 EOF 00:23:15.676 )") 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:15.676 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme1", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme2", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme3", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme4", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 
00:23:15.676 "name": "Nvme5", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme6", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme7", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme8", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.676 "name": "Nvme9", 00:23:15.676 "trtype": "tcp", 00:23:15.676 "traddr": "10.0.0.2", 00:23:15.676 "adrfam": "ipv4", 00:23:15.676 "trsvcid": "4420", 00:23:15.676 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:15.676 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:15.676 "hdgst": false, 00:23:15.676 "ddgst": false 00:23:15.676 }, 00:23:15.676 "method": "bdev_nvme_attach_controller" 00:23:15.676 },{ 00:23:15.676 "params": { 00:23:15.677 "name": "Nvme10", 00:23:15.677 "trtype": "tcp", 00:23:15.677 "traddr": "10.0.0.2", 00:23:15.677 "adrfam": "ipv4", 00:23:15.677 "trsvcid": "4420", 00:23:15.677 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:15.677 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:15.677 "hdgst": false, 00:23:15.677 "ddgst": false 00:23:15.677 }, 00:23:15.677 "method": "bdev_nvme_attach_controller" 00:23:15.677 }' 00:23:15.677 [2024-11-20 12:37:21.260515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.677 [2024-11-20 12:37:21.301445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.580 Running I/O for 10 seconds... 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.580 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:17.581 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:17.839 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 242022 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 242022 ']' 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 242022 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242022 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 242022' 00:23:18.104 killing process with pid 242022 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 242022 00:23:18.104 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 242022 00:23:18.104 [2024-11-20 12:37:23.857348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26700 is same with the state(6) to be set 00:23:18.105 [2024-11-20 12:37:23.858782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa29180 is same with the state(6) to be set 00:23:18.105 [2024-11-20 12:37:23.859712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26bf0 is same with the state(6) to be set 00:23:18.106 [2024-11-20 12:37:23.861549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa270c0 is same with the state(6) to be set 00:23:18.107 [2024-11-20 12:37:23.862421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.107 [2024-11-20 12:37:23.862451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.107 [2024-11-20 12:37:23.862461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.107 [2024-11-20 12:37:23.862468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.107 [2024-11-20 12:37:23.862475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.107 [2024-11-20 12:37:23.862482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.107 [2024-11-20 12:37:23.862490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.107 [2024-11-20 12:37:23.862496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.107 [2024-11-20 12:37:23.862507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb50 is same with the state(6) to be set 00:23:18.107 [2024-11-20
12:37:23.862558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d71b0 is same with the state(6) to be set
00:23:18.107 [2024-11-20 12:37:23.862643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d4c70 is same with the state(6) to be set
00:23:18.107 [2024-11-20 12:37:23.862722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.107 [2024-11-20 12:37:23.862775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.862781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d6d50 is same with the state(6) to be set
00:23:18.107 [2024-11-20 12:37:23.863242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.107 [2024-11-20 12:37:23.863355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.107 [2024-11-20 12:37:23.863362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.390 [2024-11-20 12:37:23.863370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.390 [2024-11-20 12:37:23.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.390 [2024-11-20 12:37:23.863386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.390 [2024-11-20 12:37:23.863393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.390 [2024-11-20 12:37:23.863401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.390 [2024-11-20 12:37:23.863407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.390 [2024-11-20 12:37:23.863415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.390 [2024-11-20 12:37:23.863421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.390 [2024-11-20 12:37:23.863422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27930 is same with the state(6) to be set
00:23:18.390 [2024-11-20 12:37:23.863440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.391 [2024-11-20 12:37:23.863772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.391 [2024-11-20 12:37:23.863781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.863991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.863998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.392 [2024-11-20 12:37:23.864243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.392 [2024-11-20 12:37:23.864252] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.392 [2024-11-20 12:37:23.864258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.392 [2024-11-20 12:37:23.864268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:18.393 [2024-11-20 12:37:23.864658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.393 [2024-11-20 12:37:23.864766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864844] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864968]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.864990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.864995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.864997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.865005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.865024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.865032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.865043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.865052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.865060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.393 [2024-11-20 12:37:23.865068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the
state(6) to be set 00:23:18.393 [2024-11-20 12:37:23.865072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.393 [2024-11-20 12:37:23.865076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 
[2024-11-20 12:37:23.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.394 [2024-11-20 12:37:23.865404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 12:37:23.865408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.394 [2024-11-20 12:37:23.865412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa27e00 is same with the state(6) to be set 00:23:18.394 [2024-11-20 
12:37:23.865415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.865576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.395 [2024-11-20 12:37:23.865583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.395 [2024-11-20 12:37:23.866481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) 
to be set 00:23:18.395 [2024-11-20 12:37:23.866641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 
12:37:23.866723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.395 [2024-11-20 12:37:23.866785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866797] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.866889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa282d0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 
is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 
00:23:18.396 [2024-11-20 12:37:23.867862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867936] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.867973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.868993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 
is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.869581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa287c0 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.870194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.870213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.870222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.396 [2024-11-20 12:37:23.870228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 
00:23:18.397 [2024-11-20 12:37:23.870251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870638] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.870984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.871045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.879214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 
12:37:23.879259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.397 [2024-11-20 12:37:23.879351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879478] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb610 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.879583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb50 (9): Bad file descriptor 00:23:18.397 [2024-11-20 12:37:23.879616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714920 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.879716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879753] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17024c0 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.879815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f8300 is same with the state(6) to be set 00:23:18.397 [2024-11-20 12:37:23.879907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d71b0 (9): Bad file descriptor 00:23:18.397 [2024-11-20 12:37:23.879934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.397 [2024-11-20 12:37:23.879961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.397 [2024-11-20 12:37:23.879971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.398 [2024-11-20 12:37:23.879978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.879986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.398 [2024-11-20 12:37:23.879995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.880003] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17017a0 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.880016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4c70 (9): Bad file descriptor 00:23:18.398 [2024-11-20 12:37:23.880035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d6d50 (9): Bad file descriptor 00:23:18.398 [2024-11-20 12:37:23.882930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:18.398 [2024-11-20 12:37:23.882967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17017a0 (9): Bad file descriptor 00:23:18.398 [2024-11-20 12:37:23.883469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:18.398 [2024-11-20 12:37:23.884787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.398 [2024-11-20 12:37:23.884794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17017a0 with addr=10.0.0.2, port=4420 00:23:18.398 [2024-11-20 12:37:23.884820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17017a0 is same with the state(6) to be set 
00:23:18.398 [2024-11-20 12:37:23.884840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884951] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.398 [2024-11-20 12:37:23.884960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb50 with addr=10.0.0.2, port=4420 00:23:18.398 [2024-11-20 12:37:23.884978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb50 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.884996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885040] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885102] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa28c90 is same with the state(6) to be set 00:23:18.398 [2024-11-20 12:37:23.885163] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885242] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17017a0 (9): Bad file descriptor 00:23:18.398 [2024-11-20 12:37:23.885523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb50 (9): Bad file descriptor 00:23:18.398 [2024-11-20 12:37:23.885642] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885701] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885757] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.885840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:18.398 [2024-11-20 12:37:23.885855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:18.398 [2024-11-20 12:37:23.885871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:18.398 [2024-11-20 12:37:23.885884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:18.398 [2024-11-20 12:37:23.885896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:18.398 [2024-11-20 12:37:23.885906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:18.398 [2024-11-20 12:37:23.885917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:23:18.398 [2024-11-20 12:37:23.885927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:18.398 [2024-11-20 12:37:23.886424] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.398 [2024-11-20 12:37:23.886499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.398 [2024-11-20 12:37:23.886515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.886533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.398 [2024-11-20 12:37:23.886546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.886560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.398 [2024-11-20 12:37:23.886572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.886586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.398 [2024-11-20 12:37:23.886596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.886616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.398 [2024-11-20 12:37:23.886627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.398 [2024-11-20 12:37:23.886641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:18.399 [2024-11-20 12:37:23.886778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.886980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.886994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.399 [2024-11-20 12:37:23.887327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.399 [2024-11-20 12:37:23.887342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 12:37:23.887775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.400 [2024-11-20 12:37:23.887786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.400 [2024-11-20 
12:37:23.887799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.400 [2024-11-20 12:37:23.887810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 further identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs omitted: cid 55-63 then cid 0-3, lba 31616-33152, len:128 ...]
00:23:18.401 [2024-11-20 12:37:23.888156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c870 is same with the state(6) to be set
00:23:18.401 [2024-11-20 12:37:23.889817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:18.401 [2024-11-20 12:37:23.889877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171b3e0 (9): Bad file descriptor
00:23:18.401 [2024-11-20 12:37:23.889900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb610 (9): Bad file descriptor
00:23:18.401 [2024-11-20 12:37:23.889933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1714920 (9): Bad file descriptor
00:23:18.401 [2024-11-20 12:37:23.889958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17024c0 (9): Bad file descriptor
00:23:18.401 [2024-11-20 12:37:23.889987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f8300 (9): Bad file descriptor
00:23:18.401 [2024-11-20 12:37:23.890173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.401 [2024-11-20 12:37:23.890191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs omitted: cid 1-63, lba 24704-32640, len:128 ...]
00:23:18.403 [2024-11-20 12:37:23.891809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14db4d0 is same with the state(6) to be set
00:23:18.403 [2024-11-20 12:37:23.893035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.403 [2024-11-20 12:37:23.893049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 36 further identical READ / "ABORTED - SQ DELETION (00/08)" entry pairs omitted: cid 5-40, lba 25216-29696, len:128 ...]
00:23:18.405 [2024-11-20
12:37:23.893697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.893970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.405 [2024-11-20 12:37:23.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.893994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.405 [2024-11-20 12:37:23.894139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.405 [2024-11-20 12:37:23.894147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.894155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc6a0 is same with the state(6) to be set 00:23:18.406 [2024-11-20 12:37:23.895233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:18.406 [2024-11-20 12:37:23.895354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.406 [2024-11-20 12:37:23.895787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.406 [2024-11-20 12:37:23.895796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895842] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.895990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.895998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 
12:37:23.896041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.407 [2024-11-20 12:37:23.896317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.407 [2024-11-20 12:37:23.896324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.407 [2024-11-20 12:37:23.896334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.407 [2024-11-20 12:37:23.896342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.407 [2024-11-20 12:37:23.896350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d8550 is same with the state(6) to be set
00:23:18.407 [2024-11-20 12:37:23.897646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:18.407 [2024-11-20 12:37:23.897665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:18.407 [2024-11-20 12:37:23.897676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:18.407 [2024-11-20 12:37:23.897893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.407 [2024-11-20 12:37:23.897911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171b3e0 with addr=10.0.0.2, port=4420
00:23:18.407 [2024-11-20 12:37:23.897921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171b3e0 is same with the state(6) to be set
00:23:18.407 [2024-11-20 12:37:23.898162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.407 [2024-11-20 12:37:23.898178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d71b0 with addr=10.0.0.2, port=4420
00:23:18.407 [2024-11-20 12:37:23.898186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d71b0 is same with the state(6) to be set
00:23:18.407 [2024-11-20 12:37:23.898391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.407 [2024-11-20 12:37:23.898404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d6d50 with addr=10.0.0.2, port=4420
00:23:18.407 [2024-11-20 12:37:23.898413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d6d50 is same with the state(6) to be set
00:23:18.407 [2024-11-20 12:37:23.898499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.407 [2024-11-20 12:37:23.898512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d4c70 with addr=10.0.0.2, port=4420
00:23:18.407 [2024-11-20 12:37:23.898521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d4c70 is same with the state(6) to be set
00:23:18.407 [2024-11-20 12:37:23.898535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171b3e0 (9): Bad file descriptor
00:23:18.407 [2024-11-20 12:37:23.899272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:18.408 [2024-11-20 12:37:23.899297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:18.408 [2024-11-20 12:37:23.899318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d71b0 (9): Bad file descriptor
00:23:18.408 [2024-11-20 12:37:23.899329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d6d50 (9): Bad file descriptor
00:23:18.408 [2024-11-20 12:37:23.899340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4c70 (9): Bad file descriptor
00:23:18.408 [2024-11-20 12:37:23.899349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.899357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.899367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.899376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.408 [2024-11-20 12:37:23.899532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb50 with addr=10.0.0.2, port=4420
00:23:18.408 [2024-11-20 12:37:23.899540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb50 is same with the state(6) to be set
00:23:18.408 [2024-11-20 12:37:23.899760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.408 [2024-11-20 12:37:23.899774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17017a0 with addr=10.0.0.2, port=4420
00:23:18.408 [2024-11-20 12:37:23.899783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17017a0 is same with the state(6) to be set
00:23:18.408 [2024-11-20 12:37:23.899791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.899799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.899807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.899815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.899824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.899831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.899838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.899846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.899854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.899861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.899869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.899876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.899923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb50 (9): Bad file descriptor
00:23:18.408 [2024-11-20 12:37:23.899938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17017a0 (9): Bad file descriptor
00:23:18.408 [2024-11-20 12:37:23.900003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.900013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.900020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.900028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.900035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:18.408 [2024-11-20 12:37:23.900043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:18.408 [2024-11-20 12:37:23.900050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:18.408 [2024-11-20 12:37:23.900056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:18.408 [2024-11-20 12:37:23.900109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.408 [2024-11-20 12:37:23.900428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.408 [2024-11-20 12:37:23.900526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.408 [2024-11-20 12:37:23.900536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 
12:37:23.900815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.900986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.900995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 
[2024-11-20 12:37:23.901107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.409 [2024-11-20 12:37:23.901192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.409 [2024-11-20 12:37:23.901208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.901216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.901226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.901233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.901242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16daf20 is same with the state(6) to be set 00:23:18.410 [2024-11-20 12:37:23.902308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.410 [2024-11-20 12:37:23.902478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 
12:37:23.902833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.410 [2024-11-20 12:37:23.902890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.410 [2024-11-20 12:37:23.902897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902922] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.902989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.902997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 
[2024-11-20 12:37:23.903096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.903334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.903343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dc4a0 is same with the state(6) to be set 00:23:18.411 [2024-11-20 12:37:23.904326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.411 [2024-11-20 12:37:23.904352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.411 [2024-11-20 12:37:23.904447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.411 [2024-11-20 12:37:23.904456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.412 [2024-11-20 12:37:23.904631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.904973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 
12:37:23.904990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.904997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.905012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.905030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.905045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.905061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.412 [2024-11-20 12:37:23.905076] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.412 [2024-11-20 12:37:23.905083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 
[2024-11-20 12:37:23.905263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.905358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.905366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dda20 is same with the state(6) to be set 00:23:18.413 [2024-11-20 12:37:23.906353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:18.413 [2024-11-20 12:37:23.906523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.413 [2024-11-20 12:37:23.906693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.413 [2024-11-20 12:37:23.906701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 
12:37:23.906873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.906989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.906996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 
[2024-11-20 12:37:23.907138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.414 [2024-11-20 12:37:23.907318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.414 [2024-11-20 12:37:23.907326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.415 [2024-11-20 12:37:23.907334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-11-20 12:37:23.907342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.415 [2024-11-20 12:37:23.907349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-11-20 12:37:23.907357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.415 [2024-11-20 12:37:23.907364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-11-20 12:37:23.907373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2626450 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.908313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.908330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.908339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:18.415 task offset: 36608 on job bdev=Nvme4n1 fails 00:23:18.415 00:23:18.415 Latency(us) 00:23:18.415 [2024-11-20T11:37:24.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme1n1 ended in about 0.94 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme1n1 : 0.94 203.77 12.74 67.92 0.00 233197.71 16227.96 212711.13 00:23:18.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme2n1 ended in about 0.94 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme2n1 : 0.94 207.54 12.97 67.77 0.00 226377.19 26214.40 213709.78 00:23:18.415 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme3n1 ended in about 0.95 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme3n1 : 0.95 202.83 12.68 67.61 0.00 226669.96 16103.13 208716.56 00:23:18.415 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme4n1 ended in about 0.93 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme4n1 : 0.93 283.71 17.73 68.78 0.00 170673.52 14105.84 212711.13 00:23:18.415 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme5n1 ended in about 0.95 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme5n1 : 0.95 201.79 12.61 67.26 0.00 220119.28 19223.89 217704.35 00:23:18.415 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme6n1 ended in about 0.95 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme6n1 : 0.95 201.35 12.58 67.12 0.00 216810.30 22469.49 212711.13 00:23:18.415 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme7n1 ended in about 0.96 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme7n1 : 0.96 200.93 12.56 66.98 0.00 213512.53 13419.28 223696.21 
00:23:18.415 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme8n1 ended in about 0.96 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme8n1 : 0.96 200.51 12.53 66.84 0.00 210147.96 15104.49 214708.42 00:23:18.415 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme9n1 ended in about 0.94 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme9n1 : 0.94 208.80 13.05 68.18 0.00 198506.44 2449.80 219701.64 00:23:18.415 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.415 Job: Nvme10n1 ended in about 0.93 seconds with error 00:23:18.415 Verification LBA range: start 0x0 length 0x400 00:23:18.415 Nvme10n1 : 0.93 206.04 12.88 68.68 0.00 196178.41 16227.96 238675.87 00:23:18.415 [2024-11-20T11:37:24.181Z] =================================================================================================================== 00:23:18.415 [2024-11-20T11:37:24.181Z] Total : 2117.29 132.33 677.14 0.00 210117.24 2449.80 238675.87 00:23:18.415 [2024-11-20 12:37:23.942141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:18.415 [2024-11-20 12:37:23.942191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.942768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.942797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17024c0 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.942809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17024c0 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.943047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.943061] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f8300 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.943074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f8300 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.943196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.943237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11eb610 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.943245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11eb610 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.943391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.943405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1714920 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.943413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714920 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.943454] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:18.415 [2024-11-20 12:37:23.943468] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:18.415 [2024-11-20 12:37:23.943479] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:23:18.415 [2024-11-20 12:37:23.943489] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:23:18.415 [2024-11-20 12:37:23.944386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17024c0 (9): Bad file descriptor 00:23:18.415 [2024-11-20 12:37:23.944491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f8300 (9): Bad file descriptor 00:23:18.415 [2024-11-20 12:37:23.944500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11eb610 (9): Bad file descriptor 00:23:18.415 [2024-11-20 12:37:23.944508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1714920 (9): Bad file descriptor 00:23:18.415 [2024-11-20 12:37:23.944561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:18.415 [2024-11-20 12:37:23.944765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.944780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171b3e0 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.944788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171b3e0 is same with the state(6) to be set 00:23:18.415 [2024-11-20 
12:37:23.945016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.945028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d4c70 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.945035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d4c70 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.945250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.945267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d6d50 with addr=10.0.0.2, port=4420 00:23:18.415 [2024-11-20 12:37:23.945275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d6d50 is same with the state(6) to be set 00:23:18.415 [2024-11-20 12:37:23.945373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.415 [2024-11-20 12:37:23.945384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d71b0 with addr=10.0.0.2, port=4420 00:23:18.416 [2024-11-20 12:37:23.945392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d71b0 is same with the state(6) to be set 00:23:18.416 [2024-11-20 12:37:23.945399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:18.416 [2024-11-20 12:37:23.945432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.945459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.945484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:18.416 [2024-11-20 12:37:23.945699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.416 [2024-11-20 12:37:23.945711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17017a0 with addr=10.0.0.2, port=4420 00:23:18.416 [2024-11-20 12:37:23.945718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17017a0 is same with the state(6) to be set 00:23:18.416 [2024-11-20 12:37:23.945806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.416 [2024-11-20 12:37:23.945818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173fb50 with addr=10.0.0.2, port=4420 00:23:18.416 [2024-11-20 12:37:23.945825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173fb50 is same with the state(6) to be set 00:23:18.416 [2024-11-20 12:37:23.945835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171b3e0 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4c70 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d6d50 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d71b0 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17017a0 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173fb50 (9): Bad file descriptor 00:23:18.416 [2024-11-20 12:37:23.945908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr 
is in error state 00:23:18.416 [2024-11-20 12:37:23.945915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.945936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.945962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.945975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.945982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:18.416 [2024-11-20 12:37:23.945988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.945994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.946001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.946007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.946031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.946038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.946045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.946051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:18.416 [2024-11-20 12:37:23.946059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:18.416 [2024-11-20 12:37:23.946065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:18.416 [2024-11-20 12:37:23.946071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:18.416 [2024-11-20 12:37:23.946077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:18.716 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 242320 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 242320 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 242320 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.662 rmmod nvme_tcp 00:23:19.662 rmmod nvme_fabrics 00:23:19.662 rmmod nvme_keyring 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:19.662 12:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 242022 ']' 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 242022 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 242022 ']' 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 242022 00:23:19.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (242022) - No such process 00:23:19.662 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 242022 is not found' 00:23:19.662 Process with pid 242022 is not found 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.663 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:22.201 00:23:22.201 real 0m8.092s 00:23:22.201 user 0m20.514s 00:23:22.201 sys 0m1.353s 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.201 ************************************ 00:23:22.201 END TEST nvmf_shutdown_tc3 00:23:22.201 ************************************ 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.201 ************************************ 00:23:22.201 START TEST nvmf_shutdown_tc4 00:23:22.201 ************************************ 00:23:22.201 12:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:22.201 12:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.201 12:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:22.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:22.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.201 12:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
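The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` step above, followed by the `${pci_net_devs[@]##*/}` prefix strip, is how the script maps a PCI address to its kernel interface names (the `Found net devices under ...` lines). A sketch of the same glob-and-strip idiom, again parameterized on the tree root so it can be exercised outside `/sys`:

```shell
# Sketch of the "Found net devices under <pci>" discovery step:
# each NIC exposes its net interfaces as subdirectories of .../net/.
net_devs_for_pci() {
    local root=$1 pci=$2
    local devs=("$root/$pci/net/"*)
    # if the glob matched nothing, bash leaves the literal pattern in devs[0]
    [[ -e ${devs[0]} ]] || return 1
    devs=("${devs[@]##*/}")     # keep only the interface names
    printf '%s\n' "${devs[@]}"
}
```

A design note: the unmatched-glob check matters because, without `nullglob`, a PCI device with no bound driver would otherwise yield the literal pattern string as a fake interface name.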
00:23:22.201 Found net devices under 0000:86:00.0: cvl_0_0 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.201 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:22.202 Found net devices under 0000:86:00.1: cvl_0_1 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.202 12:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:23:22.202 00:23:22.202 --- 10.0.0.2 ping statistics --- 00:23:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.202 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:22.202 00:23:22.202 --- 10.0.0.1 ping statistics --- 00:23:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.202 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:22.202 12:37:27 
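The nvmf_tcp_init sequence traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping both directions) can be collected into one function. The sketch below uses a `RUN` prefix that defaults to `echo` so it dry-runs by default; set `RUN=""` (and run as root) to actually execute. Interface names and addresses are taken from the log; the real helper also tags its iptables rule with an `SPDK_NVMF` comment for later cleanup:

```shell
# Dry-run sketch of the target/initiator namespace plumbing from
# nvmf/common.sh. RUN=echo prints the commands; RUN="" executes them
# (requires root).
RUN=${RUN:-echo}
setup_tcp_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"          # target NIC lives in the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, host namespace
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # accept NVMe/TCP traffic arriving on the initiator-facing interface
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # connectivity check in both directions, as in the log
    $RUN ping -c 1 10.0.0.2
    $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Putting the target NIC in its own namespace is what lets a single machine act as both NVMe-oF target and initiator over real hardware: traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link rather than the loopback path.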
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=243380 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 243380 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 243380 ']' 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
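The `waitforlisten 243380` step above blocks until the freshly launched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`. A simplified sketch of that polling loop, assuming only a pid and a socket path (SPDK's real helper also probes the socket with an actual RPC; here an existence check with `-e` stands in, since the real check needs a live RPC server):

```shell
# Sketch of the waitforlisten pattern: poll until the app's RPC socket
# path appears, bailing out early if the process dies or the retry
# budget (here ~10s) runs out.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
        [[ -e $sock ]] && return 0               # socket path is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking `kill -0` each iteration is the important part: without it, a crashed target would make the caller spin for the full timeout instead of failing fast.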
00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.202 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:22.202 [2024-11-20 12:37:27.886252] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:22.202 [2024-11-20 12:37:27.886294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.462 [2024-11-20 12:37:27.967019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.462 [2024-11-20 12:37:28.007494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.462 [2024-11-20 12:37:28.007532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.462 [2024-11-20 12:37:28.007539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.462 [2024-11-20 12:37:28.007546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.462 [2024-11-20 12:37:28.007551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.462 [2024-11-20 12:37:28.009221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.462 [2024-11-20 12:37:28.009314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.462 [2024-11-20 12:37:28.009350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:22.462 [2024-11-20 12:37:28.009358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 [2024-11-20 12:37:28.759456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.029 12:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.029 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.289 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.289 Malloc1 00:23:23.289 [2024-11-20 12:37:28.865489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.289 Malloc2 00:23:23.289 Malloc3 00:23:23.289 Malloc4 00:23:23.289 Malloc5 00:23:23.550 Malloc6 00:23:23.550 Malloc7 00:23:23.550 Malloc8 00:23:23.550 Malloc9 
00:23:23.550 Malloc10 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=243667 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:23.550 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:23.808 [2024-11-20 12:37:29.362710] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:29.087 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 243380 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 243380 ']' 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 243380 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 243380 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 243380' 00:23:29.088 killing process with pid 243380 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 243380 00:23:29.088 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 243380 00:23:29.088 [2024-11-20 12:37:34.370627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370671] 
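The `killprocess 243380` sequence traced above follows a fixed pattern: confirm the pid is non-empty and alive with `kill -0`, look up the process name with `ps --no-headers -o comm=`, refuse to signal a bare `sudo` wrapper, then kill and `wait` to reap the exit status. A condensed sketch of that pattern (not the verbatim autotest_common.sh helper, which also special-cases non-Linux hosts and falls back to `kill -9`):

```shell
# Sketch of the guard-then-kill-then-reap pattern from the log:
# killprocess PID signals the process only after sanity checks, then
# reaps it so its exit status is collected.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                     # no pid given
    kill -0 "$pid" 2>/dev/null || return 1        # not running
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1               # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; wait fails for non-children
}
```

The `sudo` check exists because killing the privilege wrapper would orphan the real (root-owned) child instead of shutting it down.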
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.370726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63320 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371358] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d637f0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371971] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.371977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63cc0 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 [2024-11-20 12:37:34.372932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62e50 is same with the state(6) to be set 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write 
completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 [2024-11-20 12:37:34.374781] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting 
I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 Write completed with error (sct=0, sc=8) 00:23:29.088 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 [2024-11-20 12:37:34.375608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting 
I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write 
completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 [2024-11-20 12:37:34.376642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 
Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 
00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: 
-6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.089 starting I/O failed: -6 00:23:29.089 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 [2024-11-20 12:37:34.378183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:29.090 [2024-11-20 12:37:34.378225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 NVMe io qpair process completion error 00:23:29.090 [2024-11-20 12:37:34.378240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c65c00 is same with the state(6) to be set 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 [2024-11-20 12:37:34.378780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50910 is same with the state(6) to be set 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 [2024-11-20 12:37:34.378801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c50910 is same with the state(6) to be set 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 [2024-11-20 12:37:34.378809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50910 is same with the state(6) to be set 00:23:29.090 starting I/O failed: -6 00:23:29.090 [2024-11-20 12:37:34.378822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50910 is same with the state(6) to be set 00:23:29.090 [2024-11-20 12:37:34.378829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50910 is same with the state(6) to be set 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 [2024-11-20 12:37:34.379149] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write 
completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 [2024-11-20 12:37:34.380015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 
00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 Write completed with error (sct=0, sc=8) 00:23:29.090 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 
00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 [2024-11-20 12:37:34.381411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: 
-6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O 
failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting 
I/O failed: -6 00:23:29.091 Write completed with error (sct=0, sc=8) 00:23:29.091 starting I/O failed: -6
00:23:29.091 [last two messages repeated many times; identical log lines collapsed]
00:23:29.091 [2024-11-20 12:37:34.383290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:29.091 NVMe io qpair process completion error
00:23:29.091 Write completed with error (sct=0, sc=8) [repeated, interleaved with "starting I/O failed: -6"; identical log lines collapsed]
00:23:29.091 [2024-11-20 12:37:34.384061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:29.092 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.092 [2024-11-20 12:37:34.384950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.092 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.092 [2024-11-20 12:37:34.385946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:29.093 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.093 [2024-11-20 12:37:34.387610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:29.093 NVMe io qpair process completion error
00:23:29.093 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.093 [2024-11-20 12:37:34.388612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:29.093 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.093 [2024-11-20 12:37:34.389477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.094 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.094 [2024-11-20 12:37:34.390512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:29.094 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.094 [2024-11-20 12:37:34.392654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:29.094 NVMe io qpair process completion error
00:23:29.094 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.095 [2024-11-20 12:37:34.393683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:29.095 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.095 [2024-11-20 12:37:34.394615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.095 Write completed with error (sct=0, sc=8) [repeated; identical log lines collapsed]
00:23:29.095 starting I/O failed: -6 00:23:29.095 Write
completed with error (sct=0, sc=8) 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 [2024-11-20 12:37:34.395695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with 
error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.095 Write completed with error (sct=0, sc=8) 00:23:29.095 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed 
with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 [2024-11-20 12:37:34.399463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:29.096 NVMe io qpair process completion error 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write 
completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 [2024-11-20 12:37:34.400460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 
00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 
00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 [2024-11-20 12:37:34.401372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.096 starting I/O failed: -6 00:23:29.096 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 
Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, 
sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 [2024-11-20 12:37:34.402370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 
00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, 
sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 [2024-11-20 12:37:34.406019] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:29.097 NVMe io qpair process completion error 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 Write completed with error (sct=0, sc=8) 00:23:29.097 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 
Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 [2024-11-20 12:37:34.406978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 00:23:29.098 starting I/O failed: -6 00:23:29.098 Write completed with error (sct=0, sc=8) 
00:23:29.098 Write completed with error (sct=0, sc=8)
00:23:29.098 starting I/O failed: -6
00:23:29.098 [2024-11-20 12:37:34.407879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:29.098 [2024-11-20 12:37:34.408879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.099 [2024-11-20 12:37:34.410696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:29.099 NVMe io qpair process completion error
00:23:29.099 [2024-11-20 12:37:34.411723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:29.100 [2024-11-20 12:37:34.412631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.100 [2024-11-20 12:37:34.413634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:29.100 [2024-11-20 12:37:34.415390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:29.100 NVMe io qpair process completion error
00:23:29.101 [2024-11-20 12:37:34.416260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:29.101 [2024-11-20 12:37:34.417145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:29.101 [2024-11-20 12:37:34.418150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:29.102 Write completed with error (sct=0, sc=8)
00:23:29.102 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 [2024-11-20 12:37:34.421316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:29.102 NVMe io qpair process completion error 00:23:29.102 Write completed 
with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 
00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 [2024-11-20 12:37:34.422269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.102 starting 
I/O failed: -6 00:23:29.102 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 [2024-11-20 12:37:34.423147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport 
error -6 (No such device or address) on qpair id 3 00:23:29.103 starting I/O failed: -6 00:23:29.103 starting I/O failed: -6 00:23:29.103 starting I/O failed: -6 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 
starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 [2024-11-20 12:37:34.424336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 
00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: 
-6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O 
failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.103 Write completed with error (sct=0, sc=8) 00:23:29.103 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 Write completed with error (sct=0, sc=8) 00:23:29.104 starting I/O failed: -6 00:23:29.104 [2024-11-20 12:37:34.427391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:29.104 NVMe io qpair process completion error 00:23:29.104 Initializing NVMe Controllers 00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:29.104 Controller IO queue size 128, less than required. 00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:29.104 Controller IO queue size 128, less than required.
00:23:29.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:29.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:29.104 Initialization complete. Launching workers.
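The "Controller IO queue size 128, less than required" advisories above are spdk_nvme_perf warning that the requested queue depth exceeds what each target I/O queue can hold, so excess requests queue at the host NVMe driver. A possible follow-up run with a lower depth is sketched below; the flags (-q queue depth, -o IO size, -w workload, -t seconds, -r transport ID) are spdk_nvme_perf's standard options, but the depth/size/duration values are illustrative and the command is only printed, since no live target is assumed here.

```shell
# Sketch only: rebuild the perf invocation with queue depth 64, below the
# target's reported IO queue size of 128. The binary path and transport ID
# mirror the log above; the command is echoed rather than executed because
# running it requires a live NVMe-oF target.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode6'
cmd=("$PERF" -q 64 -o 4096 -w randwrite -t 10 -r "$TRID")
echo "${cmd[@]}"
```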
00:23:29.104 ========================================================
00:23:29.104 Latency(us)
00:23:29.104 Device Information : IOPS MiB/s Average min max
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2192.42 94.21 58386.29 703.72 110630.81
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2212.84 95.08 57877.66 777.86 118105.34
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2197.84 94.44 57651.57 681.86 105984.25
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2193.25 94.24 57784.49 745.61 96350.50
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2193.88 94.27 57780.20 817.11 103101.80
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2199.50 94.51 57643.37 859.06 102188.51
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2198.25 94.46 57695.47 652.78 100690.25
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2195.54 94.34 57803.17 689.94 106744.21
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2160.33 92.83 58779.70 715.53 110837.77
00:23:29.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2159.08 92.77 58826.70 725.29 113303.88
00:23:29.104 ========================================================
00:23:29.104 Total : 21902.93 941.14 58020.04 652.78 118105.34
00:23:29.104
00:23:29.104 [2024-11-20 12:37:34.430377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fef0 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50a70 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51720 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f560 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fbc0 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51900 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f890 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50740 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51ae0 is same with the state(6) to be set
00:23:29.104 [2024-11-20 12:37:34.430645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50410 is same with the state(6) to be set
00:23:29.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:29.104 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 243667
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 243667
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 243667
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:30.041 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 243380 ']'
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 243380
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 243380 ']'
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 243380
00:23:30.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (243380) - No such process
00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 243380 is not found'
00:23:30.300 Process with pid 243380 is not found
00:23:30.300
12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.300 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.205 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.205 00:23:32.205 real 0m10.407s 00:23:32.205 user 0m27.528s 00:23:32.205 sys 0m5.209s 00:23:32.205 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.205 12:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:32.205 ************************************ 00:23:32.205 END TEST nvmf_shutdown_tc4 00:23:32.205 ************************************ 00:23:32.205 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:32.205 00:23:32.205 real 0m41.611s 00:23:32.205 user 1m43.669s 00:23:32.205 sys 0m14.064s 00:23:32.205 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.205 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:32.205 ************************************ 00:23:32.205 END TEST nvmf_shutdown 00:23:32.205 ************************************ 00:23:32.465 12:37:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:32.465 12:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.465 12:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.465 12:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.465 ************************************ 00:23:32.465 START TEST nvmf_nsid 00:23:32.465 ************************************ 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:32.465 * Looking for test storage... 
00:23:32.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.465 
12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.465 --rc genhtml_branch_coverage=1 00:23:32.465 --rc genhtml_function_coverage=1 00:23:32.465 --rc genhtml_legend=1 00:23:32.465 --rc geninfo_all_blocks=1 00:23:32.465 --rc 
geninfo_unexecuted_blocks=1 00:23:32.465 00:23:32.465 ' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.465 --rc genhtml_branch_coverage=1 00:23:32.465 --rc genhtml_function_coverage=1 00:23:32.465 --rc genhtml_legend=1 00:23:32.465 --rc geninfo_all_blocks=1 00:23:32.465 --rc geninfo_unexecuted_blocks=1 00:23:32.465 00:23:32.465 ' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.465 --rc genhtml_branch_coverage=1 00:23:32.465 --rc genhtml_function_coverage=1 00:23:32.465 --rc genhtml_legend=1 00:23:32.465 --rc geninfo_all_blocks=1 00:23:32.465 --rc geninfo_unexecuted_blocks=1 00:23:32.465 00:23:32.465 ' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.465 --rc genhtml_branch_coverage=1 00:23:32.465 --rc genhtml_function_coverage=1 00:23:32.465 --rc genhtml_legend=1 00:23:32.465 --rc geninfo_all_blocks=1 00:23:32.465 --rc geninfo_unexecuted_blocks=1 00:23:32.465 00:23:32.465 ' 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.465 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.466 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.725 12:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.725 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.726 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.317 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.317 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.317 12:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.317 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.317 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.317 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.317 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.317 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.317 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:39.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:23:39.317 00:23:39.317 --- 10.0.0.2 ping statistics --- 00:23:39.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.317 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:23:39.317 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:39.318 00:23:39.318 --- 10.0.0.1 ping statistics --- 00:23:39.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.318 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.318 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=248339 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 248339 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 248339 ']' 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.318 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 [2024-11-20 12:37:44.198468] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:23:39.318 [2024-11-20 12:37:44.198510] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.318 [2024-11-20 12:37:44.272283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.318 [2024-11-20 12:37:44.313648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.318 [2024-11-20 12:37:44.313681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.318 [2024-11-20 12:37:44.313689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.318 [2024-11-20 12:37:44.313694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.318 [2024-11-20 12:37:44.313699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
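Earlier in this excerpt the harness verifies connectivity between the spdk network namespace and the host by sending a single ping in each direction and relying on the "0% packet loss" summary. A minimal sketch of extracting that loss figure from ping's summary line; `ping_loss_pct` is an illustrative name, not a helper from the SPDK scripts:

```shell
# Hypothetical helper: read ping output on stdin and print the packet-loss
# percentage from the summary line, e.g.
#   "1 packets transmitted, 1 received, 0% packet loss, time 0ms" -> "0"
ping_loss_pct() {
    awk -F', *' '/packet loss/ { sub(/%.*/, "", $3); print $3 }'
}

# Demo on a summary line copied from the log above:
printf '1 packets transmitted, 1 received, 0%% packet loss, time 0ms\n' | ping_loss_pct
# -> 0
```

In the actual run this would be fed from `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1`, with a nonzero result indicating the namespace wiring failed.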
00:23:39.318 [2024-11-20 12:37:44.314243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=248374 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.318 
12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9d8bfea3-25aa-4bee-af42-786e533571be 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:39.318 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=45411b0d-42ff-4cc4-b93c-9ac865021235 00:23:39.577 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:39.577 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1b342378-86ca-415e-af86-153cfad4b9a2 00:23:39.577 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:39.577 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.577 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.577 null0 00:23:39.577 null1 00:23:39.577 [2024-11-20 12:37:45.110528] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:23:39.578 [2024-11-20 12:37:45.110574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248374 ] 00:23:39.578 null2 00:23:39.578 [2024-11-20 12:37:45.115457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.578 [2024-11-20 12:37:45.139636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 248374 /var/tmp/tgt2.sock 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 248374 ']' 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:39.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
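The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock..." message comes from `waitforlisten`, which polls with a bounded retry count (`max_retries=100` in the trace). A condensed sketch of that wait pattern, assuming the simplest check — the real helper in autotest_common.sh also probes the RPC socket via rpc.py; `wait_for_socket` is an illustrative name:

```shell
# Poll until a UNIX domain socket appears at $1, giving up after $2 attempts
# (default 100, matching the max_retries seen in the trace above).
wait_for_socket() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -S "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1   # target never came up
        sleep 0.1
    done
    return 0
}
```

On timeout the caller can kill the stuck target process and fail the test rather than hang the pipeline.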
00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.578 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:39.578 [2024-11-20 12:37:45.184033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.578 [2024-11-20 12:37:45.230005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.837 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.837 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:39.837 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:40.095 [2024-11-20 12:37:45.752475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.095 [2024-11-20 12:37:45.768583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:40.095 nvme0n1 nvme0n2 00:23:40.095 nvme1n1 00:23:40.095 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:40.095 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:40.095 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:41.474 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9d8bfea3-25aa-4bee-af42-786e533571be 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:42.412 12:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9d8bfea325aa4beeaf42786e533571be 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9D8BFEA325AA4BEEAF42786E533571BE 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9D8BFEA325AA4BEEAF42786E533571BE == \9\D\8\B\F\E\A\3\2\5\A\A\4\B\E\E\A\F\4\2\7\8\6\E\5\3\3\5\7\1\B\E ]] 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:42.412 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 45411b0d-42ff-4cc4-b93c-9ac865021235 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:42.412 
12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=45411b0d42ff4cc4b93c9ac865021235 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 45411B0D42FF4CC4B93C9AC865021235 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 45411B0D42FF4CC4B93C9AC865021235 == \4\5\4\1\1\B\0\D\4\2\F\F\4\C\C\4\B\9\3\C\9\A\C\8\6\5\0\2\1\2\3\5 ]] 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1b342378-86ca-415e-af86-153cfad4b9a2 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1b34237886ca415eaf86153cfad4b9a2 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1B34237886CA415EAF86153CFAD4B9A2 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1B34237886CA415EAF86153CFAD4B9A2 == \1\B\3\4\2\3\7\8\8\6\C\A\4\1\5\E\A\F\8\6\1\5\3\C\F\A\D\4\B\9\A\2 ]] 00:23:42.412 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 248374 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 248374 ']' 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 248374 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248374 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248374' 00:23:42.672 killing process with pid 248374 00:23:42.672 12:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 248374 00:23:42.672 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 248374 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.931 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.931 rmmod nvme_tcp 00:23:42.931 rmmod nvme_fabrics 00:23:43.191 rmmod nvme_keyring 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 248339 ']' 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 248339 ']' 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.191 12:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248339' 00:23:43.191 killing process with pid 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 248339 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.191 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.191 12:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.727 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.727 00:23:45.727 real 0m12.970s 00:23:45.727 user 0m10.418s 00:23:45.727 sys 0m5.506s 00:23:45.727 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.727 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:45.727 ************************************ 00:23:45.727 END TEST nvmf_nsid 00:23:45.727 ************************************ 00:23:45.727 12:37:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:45.727 00:23:45.727 real 11m58.629s 00:23:45.727 user 25m33.409s 00:23:45.727 sys 3m45.505s 00:23:45.727 12:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.727 12:37:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:45.727 ************************************ 00:23:45.727 END TEST nvmf_target_extra 00:23:45.727 ************************************ 00:23:45.727 12:37:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:45.727 12:37:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.727 12:37:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.727 12:37:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.727 ************************************ 00:23:45.727 START TEST nvmf_host 00:23:45.727 ************************************ 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:45.727 * Looking for test storage... 
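The core assertion of the nsid test above is that each namespace's NGUID, as reported by `nvme id-ns ... -o json | jq -r .nguid`, equals the UUID the namespace was created with, minus the dashes (compared in upper case). A minimal sketch of that `uuid2nguid` conversion — essentially the `tr -d -` seen in the trace plus uppercasing; the exact helper in nvmf/common.sh may differ in detail:

```shell
# Convert a UUID to the NGUID form used for comparison in the nsid test:
# strip dashes, uppercase.
uuid2nguid() {
    printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 9d8bfea3-25aa-4bee-af42-786e533571be
# -> 9D8BFEA325AA4BEEAF42786E533571BE
```

This matches the `[[ 9D8BFEA325AA4BEEAF42786E533571BE == ... ]]` comparisons logged for all three namespaces.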
00:23:45.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:45.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.727 --rc genhtml_branch_coverage=1 00:23:45.727 --rc genhtml_function_coverage=1 00:23:45.727 --rc genhtml_legend=1 00:23:45.727 --rc geninfo_all_blocks=1 00:23:45.727 --rc geninfo_unexecuted_blocks=1 00:23:45.727 00:23:45.727 ' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:45.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.727 --rc genhtml_branch_coverage=1 00:23:45.727 --rc genhtml_function_coverage=1 00:23:45.727 --rc genhtml_legend=1 00:23:45.727 --rc 
geninfo_all_blocks=1 00:23:45.727 --rc geninfo_unexecuted_blocks=1 00:23:45.727 00:23:45.727 ' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:45.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.727 --rc genhtml_branch_coverage=1 00:23:45.727 --rc genhtml_function_coverage=1 00:23:45.727 --rc genhtml_legend=1 00:23:45.727 --rc geninfo_all_blocks=1 00:23:45.727 --rc geninfo_unexecuted_blocks=1 00:23:45.727 00:23:45.727 ' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:45.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.727 --rc genhtml_branch_coverage=1 00:23:45.727 --rc genhtml_function_coverage=1 00:23:45.727 --rc genhtml_legend=1 00:23:45.727 --rc geninfo_all_blocks=1 00:23:45.727 --rc geninfo_unexecuted_blocks=1 00:23:45.727 00:23:45.727 ' 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:45.727 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.728 ************************************ 00:23:45.728 START TEST nvmf_multicontroller 00:23:45.728 ************************************ 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:45.728 * Looking for test storage... 
00:23:45.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:45.728 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.988 --rc genhtml_branch_coverage=1 00:23:45.988 --rc genhtml_function_coverage=1 
00:23:45.988 --rc genhtml_legend=1 00:23:45.988 --rc geninfo_all_blocks=1 00:23:45.988 --rc geninfo_unexecuted_blocks=1 00:23:45.988 00:23:45.988 ' 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.988 --rc genhtml_branch_coverage=1 00:23:45.988 --rc genhtml_function_coverage=1 00:23:45.988 --rc genhtml_legend=1 00:23:45.988 --rc geninfo_all_blocks=1 00:23:45.988 --rc geninfo_unexecuted_blocks=1 00:23:45.988 00:23:45.988 ' 00:23:45.988 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:45.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.988 --rc genhtml_branch_coverage=1 00:23:45.988 --rc genhtml_function_coverage=1 00:23:45.988 --rc genhtml_legend=1 00:23:45.988 --rc geninfo_all_blocks=1 00:23:45.988 --rc geninfo_unexecuted_blocks=1 00:23:45.988 00:23:45.989 ' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:45.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.989 --rc genhtml_branch_coverage=1 00:23:45.989 --rc genhtml_function_coverage=1 00:23:45.989 --rc genhtml_legend=1 00:23:45.989 --rc geninfo_all_blocks=1 00:23:45.989 --rc geninfo_unexecuted_blocks=1 00:23:45.989 00:23:45.989 ' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.989 12:37:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.989 12:37:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.558 12:37:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.558 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.558 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.558 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:23:52.559 00:23:52.559 --- 10.0.0.2 ping statistics --- 00:23:52.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.559 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:52.559 00:23:52.559 --- 10.0.0.1 ping statistics --- 00:23:52.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.559 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=252678 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 252678 00:23:52.559 12:37:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 252678 ']' 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 [2024-11-20 12:37:57.561106] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:23:52.559 [2024-11-20 12:37:57.561157] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.559 [2024-11-20 12:37:57.641128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:52.559 [2024-11-20 12:37:57.681430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.559 [2024-11-20 12:37:57.681468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:52.559 [2024-11-20 12:37:57.681476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.559 [2024-11-20 12:37:57.681482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.559 [2024-11-20 12:37:57.681487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.559 [2024-11-20 12:37:57.682910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.559 [2024-11-20 12:37:57.682999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.559 [2024-11-20 12:37:57.683000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 [2024-11-20 12:37:57.830545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 Malloc0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.559 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 [2024-11-20 
12:37:57.895540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 [2024-11-20 12:37:57.903467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 Malloc1 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=252710 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 252710 /var/tmp/bdevperf.sock 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 252710 ']' 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.560 12:37:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.560 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.560 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:52.560 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:52.560 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.560 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.819 NVMe0n1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.820 1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:52.820 12:37:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.820 request: 00:23:52.820 { 00:23:52.820 "name": "NVMe0", 00:23:52.820 "trtype": "tcp", 00:23:52.820 "traddr": "10.0.0.2", 00:23:52.820 "adrfam": "ipv4", 00:23:52.820 "trsvcid": "4420", 00:23:52.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.820 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:52.820 "hostaddr": "10.0.0.1", 00:23:52.820 "prchk_reftag": false, 00:23:52.820 "prchk_guard": false, 00:23:52.820 "hdgst": false, 00:23:52.820 "ddgst": false, 00:23:52.820 "allow_unrecognized_csi": false, 00:23:52.820 "method": "bdev_nvme_attach_controller", 00:23:52.820 "req_id": 1 00:23:52.820 } 00:23:52.820 Got JSON-RPC error response 00:23:52.820 response: 00:23:52.820 { 00:23:52.820 "code": -114, 00:23:52.820 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:52.820 } 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:52.820 12:37:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.820 request: 00:23:52.820 { 00:23:52.820 "name": "NVMe0", 00:23:52.820 "trtype": "tcp", 00:23:52.820 "traddr": "10.0.0.2", 00:23:52.820 "adrfam": "ipv4", 00:23:52.820 "trsvcid": "4420", 00:23:52.820 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.820 "hostaddr": "10.0.0.1", 00:23:52.820 "prchk_reftag": false, 00:23:52.820 "prchk_guard": false, 00:23:52.820 "hdgst": false, 00:23:52.820 "ddgst": false, 00:23:52.820 "allow_unrecognized_csi": false, 00:23:52.820 "method": "bdev_nvme_attach_controller", 00:23:52.820 "req_id": 1 00:23:52.820 } 00:23:52.820 Got JSON-RPC error response 00:23:52.820 response: 00:23:52.820 { 00:23:52.820 "code": -114, 00:23:52.820 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:52.820 } 00:23:52.820 12:37:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.820 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.820 request: 00:23:52.820 { 00:23:52.820 "name": "NVMe0", 00:23:52.820 "trtype": "tcp", 00:23:52.820 "traddr": "10.0.0.2", 00:23:52.820 "adrfam": "ipv4", 00:23:52.820 "trsvcid": "4420", 00:23:52.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.820 "hostaddr": "10.0.0.1", 00:23:52.820 "prchk_reftag": false, 00:23:52.820 "prchk_guard": false, 00:23:52.820 "hdgst": false, 00:23:52.820 "ddgst": false, 00:23:52.820 "multipath": "disable", 00:23:52.820 "allow_unrecognized_csi": false, 00:23:52.820 "method": "bdev_nvme_attach_controller", 00:23:52.820 "req_id": 1 00:23:52.820 } 00:23:52.820 Got JSON-RPC error response 00:23:52.820 response: 00:23:52.820 { 00:23:52.820 "code": -114, 00:23:52.820 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:52.820 } 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.821 request: 00:23:52.821 { 00:23:52.821 "name": "NVMe0", 00:23:52.821 "trtype": "tcp", 00:23:52.821 "traddr": "10.0.0.2", 00:23:52.821 "adrfam": "ipv4", 00:23:52.821 "trsvcid": "4420", 00:23:52.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.821 "hostaddr": "10.0.0.1", 00:23:52.821 "prchk_reftag": false, 00:23:52.821 "prchk_guard": false, 00:23:52.821 "hdgst": false, 00:23:52.821 "ddgst": false, 00:23:52.821 "multipath": "failover", 00:23:52.821 "allow_unrecognized_csi": false, 00:23:52.821 "method": "bdev_nvme_attach_controller", 00:23:52.821 "req_id": 1 00:23:52.821 } 00:23:52.821 Got JSON-RPC error response 00:23:52.821 response: 00:23:52.821 { 00:23:52.821 "code": -114, 00:23:52.821 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:52.821 } 00:23:52.821 12:37:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.821 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.080 NVMe0n1 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.080 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.080 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.339 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.339 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:53.339 12:37:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.273 { 00:23:54.273 "results": [ 00:23:54.273 { 00:23:54.273 "job": "NVMe0n1", 00:23:54.273 "core_mask": "0x1", 00:23:54.273 "workload": "write", 00:23:54.273 "status": "finished", 00:23:54.273 "queue_depth": 128, 00:23:54.273 "io_size": 4096, 00:23:54.273 "runtime": 1.006566, 00:23:54.273 "iops": 23642.761627156095, 00:23:54.273 "mibps": 92.3545376060785, 00:23:54.273 "io_failed": 0, 00:23:54.273 "io_timeout": 0, 00:23:54.273 "avg_latency_us": 5396.712396639974, 00:23:54.273 "min_latency_us": 4181.820952380953, 00:23:54.273 "max_latency_us": 12170.971428571429 00:23:54.273 } 00:23:54.273 ], 00:23:54.273 "core_count": 1 00:23:54.274 } 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 252710 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 252710 ']' 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 252710 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.274 12:37:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252710 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252710' 00:23:54.533 killing process with pid 252710 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 252710 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 252710 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:54.533 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:54.533 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.533 [2024-11-20 12:37:58.006851] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:23:54.533 [2024-11-20 12:37:58.006902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252710 ] 00:23:54.533 [2024-11-20 12:37:58.081083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.533 [2024-11-20 12:37:58.122098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.533 [2024-11-20 12:37:58.829551] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 377f4046-9051-4e29-b267-98cf6a81ce15 already exists 00:23:54.533 [2024-11-20 12:37:58.829576] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:377f4046-9051-4e29-b267-98cf6a81ce15 alias for bdev NVMe1n1 00:23:54.533 [2024-11-20 12:37:58.829584] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:54.533 Running I/O for 1 seconds... 00:23:54.533 23638.00 IOPS, 92.34 MiB/s 00:23:54.533 Latency(us) 00:23:54.533 [2024-11-20T11:38:00.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.533 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:54.533 NVMe0n1 : 1.01 23642.76 92.35 0.00 0.00 5396.71 4181.82 12170.97 00:23:54.533 [2024-11-20T11:38:00.299Z] =================================================================================================================== 00:23:54.533 [2024-11-20T11:38:00.299Z] Total : 23642.76 92.35 0.00 0.00 5396.71 4181.82 12170.97 00:23:54.533 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.533 00:23:54.533 Latency(us) 00:23:54.533 [2024-11-20T11:38:00.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.533 [2024-11-20T11:38:00.299Z] =================================================================================================================== 00:23:54.533 [2024-11-20T11:38:00.300Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:54.534 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.534 rmmod nvme_tcp 00:23:54.534 rmmod nvme_fabrics 00:23:54.534 rmmod nvme_keyring 00:23:54.534 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 252678 ']' 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 252678 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 252678 ']' 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 252678 
00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252678 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252678' 00:23:54.793 killing process with pid 252678 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 252678 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 252678 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:54.793 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.052 12:38:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.957 00:23:56.957 real 0m11.260s 00:23:56.957 user 0m12.590s 00:23:56.957 sys 0m5.282s 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:56.957 ************************************ 00:23:56.957 END TEST nvmf_multicontroller 00:23:56.957 ************************************ 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.957 ************************************ 00:23:56.957 START TEST nvmf_aer 00:23:56.957 ************************************ 00:23:56.957 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:57.216 * Looking for test storage... 
00:23:57.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.216 --rc genhtml_branch_coverage=1 00:23:57.216 --rc genhtml_function_coverage=1 00:23:57.216 --rc genhtml_legend=1 00:23:57.216 --rc geninfo_all_blocks=1 00:23:57.216 --rc geninfo_unexecuted_blocks=1 00:23:57.216 00:23:57.216 ' 00:23:57.216 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.216 --rc 
genhtml_branch_coverage=1 00:23:57.216 --rc genhtml_function_coverage=1 00:23:57.216 --rc genhtml_legend=1 00:23:57.216 --rc geninfo_all_blocks=1 00:23:57.217 --rc geninfo_unexecuted_blocks=1 00:23:57.217 00:23:57.217 ' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.217 --rc genhtml_branch_coverage=1 00:23:57.217 --rc genhtml_function_coverage=1 00:23:57.217 --rc genhtml_legend=1 00:23:57.217 --rc geninfo_all_blocks=1 00:23:57.217 --rc geninfo_unexecuted_blocks=1 00:23:57.217 00:23:57.217 ' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.217 --rc genhtml_branch_coverage=1 00:23:57.217 --rc genhtml_function_coverage=1 00:23:57.217 --rc genhtml_legend=1 00:23:57.217 --rc geninfo_all_blocks=1 00:23:57.217 --rc geninfo_unexecuted_blocks=1 00:23:57.217 00:23:57.217 ' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.217 12:38:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.217 12:38:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:03.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:03.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.793 12:38:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.793 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:03.794 Found net devices under 0000:86:00.0: cvl_0_0 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:03.794 Found net devices under 0000:86:00.1: cvl_0_1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:03.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:24:03.794 00:24:03.794 --- 10.0.0.2 ping statistics --- 00:24:03.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.794 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:03.794 00:24:03.794 --- 10.0.0.1 ping statistics --- 00:24:03.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.794 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=256696 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 256696 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 256696 ']' 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.794 12:38:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.794 [2024-11-20 12:38:08.927253] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:03.794 [2024-11-20 12:38:08.927311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.794 [2024-11-20 12:38:09.008750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.794 [2024-11-20 12:38:09.050012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:03.795 [2024-11-20 12:38:09.050051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.795 [2024-11-20 12:38:09.050058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.795 [2024-11-20 12:38:09.050066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.795 [2024-11-20 12:38:09.050070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.795 [2024-11-20 12:38:09.051655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.795 [2024-11-20 12:38:09.051763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.795 [2024-11-20 12:38:09.051869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.795 [2024-11-20 12:38:09.051871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.054 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.054 [2024-11-20 12:38:09.814282] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.314 Malloc0 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.314 [2024-11-20 12:38:09.885885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.314 [ 00:24:04.314 { 00:24:04.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:04.314 "subtype": "Discovery", 00:24:04.314 "listen_addresses": [], 00:24:04.314 "allow_any_host": true, 00:24:04.314 "hosts": [] 00:24:04.314 }, 00:24:04.314 { 00:24:04.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.314 "subtype": "NVMe", 00:24:04.314 "listen_addresses": [ 00:24:04.314 { 00:24:04.314 "trtype": "TCP", 00:24:04.314 "adrfam": "IPv4", 00:24:04.314 "traddr": "10.0.0.2", 00:24:04.314 "trsvcid": "4420" 00:24:04.314 } 00:24:04.314 ], 00:24:04.314 "allow_any_host": true, 00:24:04.314 "hosts": [], 00:24:04.314 "serial_number": "SPDK00000000000001", 00:24:04.314 "model_number": "SPDK bdev Controller", 00:24:04.314 "max_namespaces": 2, 00:24:04.314 "min_cntlid": 1, 00:24:04.314 "max_cntlid": 65519, 00:24:04.314 "namespaces": [ 00:24:04.314 { 00:24:04.314 "nsid": 1, 00:24:04.314 "bdev_name": "Malloc0", 00:24:04.314 "name": "Malloc0", 00:24:04.314 "nguid": "6B1521101EBB48A8A0ADD99EC9631318", 00:24:04.314 "uuid": "6b152110-1ebb-48a8-a0ad-d99ec9631318" 00:24:04.314 } 00:24:04.314 ] 00:24:04.314 } 00:24:04.314 ] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=256945 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:04.314 12:38:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:04.314 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.314 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:04.314 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:04.314 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 Malloc1 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 Asynchronous Event Request test 00:24:04.574 Attaching to 10.0.0.2 00:24:04.574 Attached to 10.0.0.2 00:24:04.574 Registering asynchronous event callbacks... 00:24:04.574 Starting namespace attribute notice tests for all controllers... 00:24:04.574 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:04.574 aer_cb - Changed Namespace 00:24:04.574 Cleaning up... 
00:24:04.574 [ 00:24:04.574 { 00:24:04.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:04.574 "subtype": "Discovery", 00:24:04.574 "listen_addresses": [], 00:24:04.574 "allow_any_host": true, 00:24:04.574 "hosts": [] 00:24:04.574 }, 00:24:04.574 { 00:24:04.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.574 "subtype": "NVMe", 00:24:04.574 "listen_addresses": [ 00:24:04.574 { 00:24:04.574 "trtype": "TCP", 00:24:04.574 "adrfam": "IPv4", 00:24:04.574 "traddr": "10.0.0.2", 00:24:04.574 "trsvcid": "4420" 00:24:04.574 } 00:24:04.574 ], 00:24:04.574 "allow_any_host": true, 00:24:04.574 "hosts": [], 00:24:04.574 "serial_number": "SPDK00000000000001", 00:24:04.574 "model_number": "SPDK bdev Controller", 00:24:04.574 "max_namespaces": 2, 00:24:04.574 "min_cntlid": 1, 00:24:04.574 "max_cntlid": 65519, 00:24:04.574 "namespaces": [ 00:24:04.574 { 00:24:04.574 "nsid": 1, 00:24:04.574 "bdev_name": "Malloc0", 00:24:04.574 "name": "Malloc0", 00:24:04.574 "nguid": "6B1521101EBB48A8A0ADD99EC9631318", 00:24:04.574 "uuid": "6b152110-1ebb-48a8-a0ad-d99ec9631318" 00:24:04.574 }, 00:24:04.574 { 00:24:04.574 "nsid": 2, 00:24:04.574 "bdev_name": "Malloc1", 00:24:04.574 "name": "Malloc1", 00:24:04.574 "nguid": "6C35374E41F94E2099F88A62EC7CE369", 00:24:04.574 "uuid": "6c35374e-41f9-4e20-99f8-8a62ec7ce369" 00:24:04.574 } 00:24:04.574 ] 00:24:04.574 } 00:24:04.574 ] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 256945 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.574 rmmod nvme_tcp 00:24:04.574 rmmod nvme_fabrics 00:24:04.574 rmmod nvme_keyring 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
256696 ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 256696 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 256696 ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 256696 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.574 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256696 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256696' 00:24:04.834 killing process with pid 256696 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 256696 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 256696 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.834 12:38:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.371 00:24:07.371 real 0m9.897s 00:24:07.371 user 0m7.814s 00:24:07.371 sys 0m4.876s 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.371 ************************************ 00:24:07.371 END TEST nvmf_aer 00:24:07.371 ************************************ 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.371 ************************************ 00:24:07.371 START TEST nvmf_async_init 00:24:07.371 ************************************ 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:07.371 * Looking for test storage... 
00:24:07.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.371 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.372 12:38:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.372 --rc genhtml_branch_coverage=1 00:24:07.372 --rc genhtml_function_coverage=1 00:24:07.372 --rc genhtml_legend=1 00:24:07.372 --rc geninfo_all_blocks=1 00:24:07.372 --rc geninfo_unexecuted_blocks=1 00:24:07.372 
00:24:07.372 ' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.372 --rc genhtml_branch_coverage=1 00:24:07.372 --rc genhtml_function_coverage=1 00:24:07.372 --rc genhtml_legend=1 00:24:07.372 --rc geninfo_all_blocks=1 00:24:07.372 --rc geninfo_unexecuted_blocks=1 00:24:07.372 00:24:07.372 ' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.372 --rc genhtml_branch_coverage=1 00:24:07.372 --rc genhtml_function_coverage=1 00:24:07.372 --rc genhtml_legend=1 00:24:07.372 --rc geninfo_all_blocks=1 00:24:07.372 --rc geninfo_unexecuted_blocks=1 00:24:07.372 00:24:07.372 ' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.372 --rc genhtml_branch_coverage=1 00:24:07.372 --rc genhtml_function_coverage=1 00:24:07.372 --rc genhtml_legend=1 00:24:07.372 --rc geninfo_all_blocks=1 00:24:07.372 --rc geninfo_unexecuted_blocks=1 00:24:07.372 00:24:07.372 ' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=00d23c5ea2524cc1ade4f1bc2c6d9a84 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:07.372 12:38:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.943 12:38:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.943 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.943 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.944 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:13.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:24:13.944 00:24:13.944 --- 10.0.0.2 ping statistics --- 00:24:13.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.944 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:13.944 00:24:13.944 --- 10.0.0.1 ping statistics --- 00:24:13.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.944 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=260487 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 260487 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 260487 ']' 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.944 12:38:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 [2024-11-20 12:38:18.877378] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:24:13.944 [2024-11-20 12:38:18.877431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.944 [2024-11-20 12:38:18.955502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.944 [2024-11-20 12:38:18.995982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.944 [2024-11-20 12:38:18.996018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.944 [2024-11-20 12:38:18.996025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.944 [2024-11-20 12:38:18.996031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.944 [2024-11-20 12:38:18.996035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.944 [2024-11-20 12:38:18.996603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 [2024-11-20 12:38:19.139888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 null0 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 00d23c5ea2524cc1ade4f1bc2c6d9a84 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.944 [2024-11-20 12:38:19.192174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:13.944 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 nvme0n1 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 [ 00:24:13.945 { 00:24:13.945 "name": "nvme0n1", 00:24:13.945 "aliases": [ 00:24:13.945 "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84" 00:24:13.945 ], 00:24:13.945 "product_name": "NVMe disk", 00:24:13.945 "block_size": 512, 00:24:13.945 "num_blocks": 2097152, 00:24:13.945 "uuid": "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84", 00:24:13.945 "numa_id": 1, 00:24:13.945 "assigned_rate_limits": { 00:24:13.945 "rw_ios_per_sec": 0, 00:24:13.945 "rw_mbytes_per_sec": 0, 00:24:13.945 "r_mbytes_per_sec": 0, 00:24:13.945 "w_mbytes_per_sec": 0 00:24:13.945 }, 00:24:13.945 "claimed": false, 00:24:13.945 "zoned": false, 00:24:13.945 "supported_io_types": { 00:24:13.945 "read": true, 00:24:13.945 "write": true, 00:24:13.945 "unmap": false, 00:24:13.945 "flush": true, 00:24:13.945 "reset": true, 00:24:13.945 "nvme_admin": true, 00:24:13.945 "nvme_io": true, 00:24:13.945 "nvme_io_md": false, 00:24:13.945 "write_zeroes": true, 00:24:13.945 "zcopy": false, 00:24:13.945 "get_zone_info": false, 00:24:13.945 "zone_management": false, 00:24:13.945 "zone_append": false, 00:24:13.945 "compare": true, 00:24:13.945 "compare_and_write": true, 00:24:13.945 "abort": true, 00:24:13.945 "seek_hole": false, 00:24:13.945 "seek_data": false, 00:24:13.945 "copy": true, 00:24:13.945 
"nvme_iov_md": false 00:24:13.945 }, 00:24:13.945 "memory_domains": [ 00:24:13.945 { 00:24:13.945 "dma_device_id": "system", 00:24:13.945 "dma_device_type": 1 00:24:13.945 } 00:24:13.945 ], 00:24:13.945 "driver_specific": { 00:24:13.945 "nvme": [ 00:24:13.945 { 00:24:13.945 "trid": { 00:24:13.945 "trtype": "TCP", 00:24:13.945 "adrfam": "IPv4", 00:24:13.945 "traddr": "10.0.0.2", 00:24:13.945 "trsvcid": "4420", 00:24:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:13.945 }, 00:24:13.945 "ctrlr_data": { 00:24:13.945 "cntlid": 1, 00:24:13.945 "vendor_id": "0x8086", 00:24:13.945 "model_number": "SPDK bdev Controller", 00:24:13.945 "serial_number": "00000000000000000000", 00:24:13.945 "firmware_revision": "25.01", 00:24:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.945 "oacs": { 00:24:13.945 "security": 0, 00:24:13.945 "format": 0, 00:24:13.945 "firmware": 0, 00:24:13.945 "ns_manage": 0 00:24:13.945 }, 00:24:13.945 "multi_ctrlr": true, 00:24:13.945 "ana_reporting": false 00:24:13.945 }, 00:24:13.945 "vs": { 00:24:13.945 "nvme_version": "1.3" 00:24:13.945 }, 00:24:13.945 "ns_data": { 00:24:13.945 "id": 1, 00:24:13.945 "can_share": true 00:24:13.945 } 00:24:13.945 } 00:24:13.945 ], 00:24:13.945 "mp_policy": "active_passive" 00:24:13.945 } 00:24:13.945 } 00:24:13.945 ] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 [2024-11-20 12:38:19.460686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:13.945 [2024-11-20 12:38:19.460741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x2121220 (9): Bad file descriptor 00:24:13.945 [2024-11-20 12:38:19.592289] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 [ 00:24:13.945 { 00:24:13.945 "name": "nvme0n1", 00:24:13.945 "aliases": [ 00:24:13.945 "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84" 00:24:13.945 ], 00:24:13.945 "product_name": "NVMe disk", 00:24:13.945 "block_size": 512, 00:24:13.945 "num_blocks": 2097152, 00:24:13.945 "uuid": "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84", 00:24:13.945 "numa_id": 1, 00:24:13.945 "assigned_rate_limits": { 00:24:13.945 "rw_ios_per_sec": 0, 00:24:13.945 "rw_mbytes_per_sec": 0, 00:24:13.945 "r_mbytes_per_sec": 0, 00:24:13.945 "w_mbytes_per_sec": 0 00:24:13.945 }, 00:24:13.945 "claimed": false, 00:24:13.945 "zoned": false, 00:24:13.945 "supported_io_types": { 00:24:13.945 "read": true, 00:24:13.945 "write": true, 00:24:13.945 "unmap": false, 00:24:13.945 "flush": true, 00:24:13.945 "reset": true, 00:24:13.945 "nvme_admin": true, 00:24:13.945 "nvme_io": true, 00:24:13.945 "nvme_io_md": false, 00:24:13.945 "write_zeroes": true, 00:24:13.945 "zcopy": false, 00:24:13.945 "get_zone_info": false, 00:24:13.945 "zone_management": false, 00:24:13.945 "zone_append": false, 00:24:13.945 "compare": true, 00:24:13.945 "compare_and_write": true, 00:24:13.945 "abort": true, 00:24:13.945 "seek_hole": false, 00:24:13.945 "seek_data": false, 00:24:13.945 "copy": true, 00:24:13.945 "nvme_iov_md": false 00:24:13.945 }, 00:24:13.945 "memory_domains": [ 
00:24:13.945 { 00:24:13.945 "dma_device_id": "system", 00:24:13.945 "dma_device_type": 1 00:24:13.945 } 00:24:13.945 ], 00:24:13.945 "driver_specific": { 00:24:13.945 "nvme": [ 00:24:13.945 { 00:24:13.945 "trid": { 00:24:13.945 "trtype": "TCP", 00:24:13.945 "adrfam": "IPv4", 00:24:13.945 "traddr": "10.0.0.2", 00:24:13.945 "trsvcid": "4420", 00:24:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:13.945 }, 00:24:13.945 "ctrlr_data": { 00:24:13.945 "cntlid": 2, 00:24:13.945 "vendor_id": "0x8086", 00:24:13.945 "model_number": "SPDK bdev Controller", 00:24:13.945 "serial_number": "00000000000000000000", 00:24:13.945 "firmware_revision": "25.01", 00:24:13.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.945 "oacs": { 00:24:13.945 "security": 0, 00:24:13.945 "format": 0, 00:24:13.945 "firmware": 0, 00:24:13.945 "ns_manage": 0 00:24:13.945 }, 00:24:13.945 "multi_ctrlr": true, 00:24:13.945 "ana_reporting": false 00:24:13.945 }, 00:24:13.945 "vs": { 00:24:13.945 "nvme_version": "1.3" 00:24:13.945 }, 00:24:13.945 "ns_data": { 00:24:13.945 "id": 1, 00:24:13.945 "can_share": true 00:24:13.945 } 00:24:13.945 } 00:24:13.945 ], 00:24:13.945 "mp_policy": "active_passive" 00:24:13.945 } 00:24:13.945 } 00:24:13.945 ] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5ZKUzCkEyI 
00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5ZKUzCkEyI 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.5ZKUzCkEyI 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 [2024-11-20 12:38:19.665299] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.945 [2024-11-20 12:38:19.665404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.945 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.946 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.946 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.946 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:13.946 [2024-11-20 12:38:19.685369] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.206 nvme0n1 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.206 [ 00:24:14.206 { 00:24:14.206 "name": "nvme0n1", 00:24:14.206 "aliases": [ 00:24:14.206 "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84" 00:24:14.206 ], 00:24:14.206 "product_name": "NVMe disk", 00:24:14.206 "block_size": 512, 00:24:14.206 "num_blocks": 2097152, 00:24:14.206 "uuid": "00d23c5e-a252-4cc1-ade4-f1bc2c6d9a84", 00:24:14.206 "numa_id": 1, 00:24:14.206 "assigned_rate_limits": { 00:24:14.206 "rw_ios_per_sec": 0, 00:24:14.206 
"rw_mbytes_per_sec": 0, 00:24:14.206 "r_mbytes_per_sec": 0, 00:24:14.206 "w_mbytes_per_sec": 0 00:24:14.206 }, 00:24:14.206 "claimed": false, 00:24:14.206 "zoned": false, 00:24:14.206 "supported_io_types": { 00:24:14.206 "read": true, 00:24:14.206 "write": true, 00:24:14.206 "unmap": false, 00:24:14.206 "flush": true, 00:24:14.206 "reset": true, 00:24:14.206 "nvme_admin": true, 00:24:14.206 "nvme_io": true, 00:24:14.206 "nvme_io_md": false, 00:24:14.206 "write_zeroes": true, 00:24:14.206 "zcopy": false, 00:24:14.206 "get_zone_info": false, 00:24:14.206 "zone_management": false, 00:24:14.206 "zone_append": false, 00:24:14.206 "compare": true, 00:24:14.206 "compare_and_write": true, 00:24:14.206 "abort": true, 00:24:14.206 "seek_hole": false, 00:24:14.206 "seek_data": false, 00:24:14.206 "copy": true, 00:24:14.206 "nvme_iov_md": false 00:24:14.206 }, 00:24:14.206 "memory_domains": [ 00:24:14.206 { 00:24:14.206 "dma_device_id": "system", 00:24:14.206 "dma_device_type": 1 00:24:14.206 } 00:24:14.206 ], 00:24:14.206 "driver_specific": { 00:24:14.206 "nvme": [ 00:24:14.206 { 00:24:14.206 "trid": { 00:24:14.206 "trtype": "TCP", 00:24:14.206 "adrfam": "IPv4", 00:24:14.206 "traddr": "10.0.0.2", 00:24:14.206 "trsvcid": "4421", 00:24:14.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:14.206 }, 00:24:14.206 "ctrlr_data": { 00:24:14.206 "cntlid": 3, 00:24:14.206 "vendor_id": "0x8086", 00:24:14.206 "model_number": "SPDK bdev Controller", 00:24:14.206 "serial_number": "00000000000000000000", 00:24:14.206 "firmware_revision": "25.01", 00:24:14.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.206 "oacs": { 00:24:14.206 "security": 0, 00:24:14.206 "format": 0, 00:24:14.206 "firmware": 0, 00:24:14.206 "ns_manage": 0 00:24:14.206 }, 00:24:14.206 "multi_ctrlr": true, 00:24:14.206 "ana_reporting": false 00:24:14.206 }, 00:24:14.206 "vs": { 00:24:14.206 "nvme_version": "1.3" 00:24:14.206 }, 00:24:14.206 "ns_data": { 00:24:14.206 "id": 1, 00:24:14.206 "can_share": true 00:24:14.206 } 
00:24:14.206 } 00:24:14.206 ], 00:24:14.206 "mp_policy": "active_passive" 00:24:14.206 } 00:24:14.206 } 00:24:14.206 ] 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.5ZKUzCkEyI 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.206 rmmod nvme_tcp 00:24:14.206 rmmod nvme_fabrics 00:24:14.206 rmmod nvme_keyring 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:14.206 12:38:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 260487 ']' 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 260487 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 260487 ']' 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 260487 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260487 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260487' 00:24:14.206 killing process with pid 260487 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 260487 00:24:14.206 12:38:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 260487 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:14.466 12:38:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.466 12:38:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.003 12:38:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.003 00:24:17.003 real 0m9.475s 00:24:17.003 user 0m3.065s 00:24:17.003 sys 0m4.848s 00:24:17.003 12:38:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.003 12:38:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.003 ************************************ 00:24:17.003 END TEST nvmf_async_init 00:24:17.003 ************************************ 00:24:17.003 12:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.003 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.004 ************************************ 00:24:17.004 START TEST dma 00:24:17.004 ************************************ 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:17.004 * 
Looking for test storage... 00:24:17.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.004 --rc genhtml_branch_coverage=1 00:24:17.004 --rc genhtml_function_coverage=1 00:24:17.004 --rc genhtml_legend=1 00:24:17.004 --rc geninfo_all_blocks=1 00:24:17.004 --rc geninfo_unexecuted_blocks=1 00:24:17.004 00:24:17.004 ' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.004 --rc genhtml_branch_coverage=1 00:24:17.004 --rc genhtml_function_coverage=1 
00:24:17.004 --rc genhtml_legend=1 00:24:17.004 --rc geninfo_all_blocks=1 00:24:17.004 --rc geninfo_unexecuted_blocks=1 00:24:17.004 00:24:17.004 ' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.004 --rc genhtml_branch_coverage=1 00:24:17.004 --rc genhtml_function_coverage=1 00:24:17.004 --rc genhtml_legend=1 00:24:17.004 --rc geninfo_all_blocks=1 00:24:17.004 --rc geninfo_unexecuted_blocks=1 00:24:17.004 00:24:17.004 ' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.004 --rc genhtml_branch_coverage=1 00:24:17.004 --rc genhtml_function_coverage=1 00:24:17.004 --rc genhtml_legend=1 00:24:17.004 --rc geninfo_all_blocks=1 00:24:17.004 --rc geninfo_unexecuted_blocks=1 00:24:17.004 00:24:17.004 ' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:17.004 
12:38:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:17.004 12:38:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:17.004 00:24:17.004 real 0m0.208s 00:24:17.004 user 0m0.125s 00:24:17.004 sys 0m0.098s 00:24:17.005 12:38:22 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:17.005 ************************************ 00:24:17.005 END TEST dma 00:24:17.005 ************************************ 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.005 ************************************ 00:24:17.005 START TEST nvmf_identify 00:24:17.005 ************************************ 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:17.005 * Looking for test storage... 
00:24:17.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.005 --rc genhtml_branch_coverage=1 00:24:17.005 --rc genhtml_function_coverage=1 00:24:17.005 --rc genhtml_legend=1 00:24:17.005 --rc geninfo_all_blocks=1 00:24:17.005 --rc geninfo_unexecuted_blocks=1 00:24:17.005 00:24:17.005 ' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:17.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.005 --rc genhtml_branch_coverage=1 00:24:17.005 --rc genhtml_function_coverage=1 00:24:17.005 --rc genhtml_legend=1 00:24:17.005 --rc geninfo_all_blocks=1 00:24:17.005 --rc geninfo_unexecuted_blocks=1 00:24:17.005 00:24:17.005 ' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.005 --rc genhtml_branch_coverage=1 00:24:17.005 --rc genhtml_function_coverage=1 00:24:17.005 --rc genhtml_legend=1 00:24:17.005 --rc geninfo_all_blocks=1 00:24:17.005 --rc geninfo_unexecuted_blocks=1 00:24:17.005 00:24:17.005 ' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.005 --rc genhtml_branch_coverage=1 00:24:17.005 --rc genhtml_function_coverage=1 00:24:17.005 --rc genhtml_legend=1 00:24:17.005 --rc geninfo_all_blocks=1 00:24:17.005 --rc geninfo_unexecuted_blocks=1 00:24:17.005 00:24:17.005 ' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.005 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.006 12:38:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.577 12:38:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.577 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:23.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.578 
12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:23.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:23.578 Found net devices under 0000:86:00.0: cvl_0_0 00:24:23.578 12:38:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:23.578 Found net devices under 0000:86:00.1: cvl_0_1 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:23.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:23.578 00:24:23.578 --- 10.0.0.2 ping statistics --- 00:24:23.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.578 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:24:23.578 00:24:23.578 --- 10.0.0.1 ping statistics --- 00:24:23.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.578 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=264304 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 264304 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 264304 ']' 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.578 12:38:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.578 [2024-11-20 12:38:28.684908] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:23.578 [2024-11-20 12:38:28.684954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.578 [2024-11-20 12:38:28.765434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.578 [2024-11-20 12:38:28.808810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.578 [2024-11-20 12:38:28.808848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.578 [2024-11-20 12:38:28.808856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.578 [2024-11-20 12:38:28.808862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.578 [2024-11-20 12:38:28.808867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:23.578 [2024-11-20 12:38:28.810486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.578 [2024-11-20 12:38:28.810603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.578 [2024-11-20 12:38:28.810710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.578 [2024-11-20 12:38:28.810711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.837 [2024-11-20 12:38:29.525489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.837 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.097 Malloc0 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.097 12:38:29 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.097 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.098 [2024-11-20 12:38:29.625608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.098 12:38:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.098 [ 00:24:24.098 { 00:24:24.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.098 "subtype": "Discovery", 00:24:24.098 "listen_addresses": [ 00:24:24.098 { 00:24:24.098 "trtype": "TCP", 00:24:24.098 "adrfam": "IPv4", 00:24:24.098 "traddr": "10.0.0.2", 00:24:24.098 "trsvcid": "4420" 00:24:24.098 } 00:24:24.098 ], 00:24:24.098 "allow_any_host": true, 00:24:24.098 "hosts": [] 00:24:24.098 }, 00:24:24.098 { 00:24:24.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.098 "subtype": "NVMe", 00:24:24.098 "listen_addresses": [ 00:24:24.098 { 00:24:24.098 "trtype": "TCP", 00:24:24.098 "adrfam": "IPv4", 00:24:24.098 "traddr": "10.0.0.2", 00:24:24.098 "trsvcid": "4420" 00:24:24.098 } 00:24:24.098 ], 00:24:24.098 "allow_any_host": true, 00:24:24.098 "hosts": [], 00:24:24.098 "serial_number": "SPDK00000000000001", 00:24:24.098 "model_number": "SPDK bdev Controller", 00:24:24.098 "max_namespaces": 32, 00:24:24.098 "min_cntlid": 1, 00:24:24.098 "max_cntlid": 65519, 00:24:24.098 "namespaces": [ 00:24:24.098 { 00:24:24.098 "nsid": 1, 00:24:24.098 "bdev_name": "Malloc0", 00:24:24.098 "name": "Malloc0", 00:24:24.098 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:24.098 "eui64": "ABCDEF0123456789", 00:24:24.098 "uuid": "38dbae00-28ad-4eba-b848-6ccbf0d10fc0" 00:24:24.098 } 00:24:24.098 ] 00:24:24.098 } 00:24:24.098 ] 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.098 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:24.098 [2024-11-20 12:38:29.676692] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:24.098 [2024-11-20 12:38:29.676724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264356 ] 00:24:24.098 [2024-11-20 12:38:29.715729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:24.098 [2024-11-20 12:38:29.715776] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:24.098 [2024-11-20 12:38:29.715781] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:24.098 [2024-11-20 12:38:29.715794] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:24.098 [2024-11-20 12:38:29.715805] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:24.098 [2024-11-20 12:38:29.719501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:24.098 [2024-11-20 12:38:29.719532] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x96e690 0 00:24:24.098 [2024-11-20 12:38:29.727215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:24.098 [2024-11-20 12:38:29.727228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:24.098 [2024-11-20 12:38:29.727233] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:24.098 [2024-11-20 12:38:29.727236] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:24.098 [2024-11-20 12:38:29.727266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.727272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.727275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.098 [2024-11-20 12:38:29.727289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:24.098 [2024-11-20 12:38:29.727306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.098 [2024-11-20 12:38:29.734213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.098 [2024-11-20 12:38:29.734222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.098 [2024-11-20 12:38:29.734225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.098 [2024-11-20 12:38:29.734239] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:24.098 [2024-11-20 12:38:29.734246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:24.098 [2024-11-20 12:38:29.734251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:24.098 [2024-11-20 12:38:29.734264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 
00:24:24.098 [2024-11-20 12:38:29.734278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.098 [2024-11-20 12:38:29.734291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.098 [2024-11-20 12:38:29.734366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.098 [2024-11-20 12:38:29.734372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.098 [2024-11-20 12:38:29.734375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.098 [2024-11-20 12:38:29.734382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:24.098 [2024-11-20 12:38:29.734389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:24.098 [2024-11-20 12:38:29.734395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.098 [2024-11-20 12:38:29.734407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.098 [2024-11-20 12:38:29.734417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.098 [2024-11-20 12:38:29.734477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.098 [2024-11-20 12:38:29.734482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:24.098 [2024-11-20 12:38:29.734485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.098 [2024-11-20 12:38:29.734495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:24.098 [2024-11-20 12:38:29.734502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:24.098 [2024-11-20 12:38:29.734508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.098 [2024-11-20 12:38:29.734519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.098 [2024-11-20 12:38:29.734528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.098 [2024-11-20 12:38:29.734587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.098 [2024-11-20 12:38:29.734592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.098 [2024-11-20 12:38:29.734596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.098 [2024-11-20 12:38:29.734603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:24.098 [2024-11-20 12:38:29.734611] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.098 [2024-11-20 12:38:29.734623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.098 [2024-11-20 12:38:29.734632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.098 [2024-11-20 12:38:29.734696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.098 [2024-11-20 12:38:29.734701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.098 [2024-11-20 12:38:29.734704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.098 [2024-11-20 12:38:29.734707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.098 [2024-11-20 12:38:29.734711] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:24.098 [2024-11-20 12:38:29.734716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:24.099 [2024-11-20 12:38:29.734722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:24.099 [2024-11-20 12:38:29.734830] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:24.099 [2024-11-20 12:38:29.734834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:24.099 [2024-11-20 12:38:29.734842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.734845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.734848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.734854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-11-20 12:38:29.734863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.099 [2024-11-20 12:38:29.734935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.099 [2024-11-20 12:38:29.734941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.099 [2024-11-20 12:38:29.734944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.734947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.099 [2024-11-20 12:38:29.734951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:24.099 [2024-11-20 12:38:29.734959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.734962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.734965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.734971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-11-20 12:38:29.734980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.099 [2024-11-20 
12:38:29.735046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.099 [2024-11-20 12:38:29.735051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.099 [2024-11-20 12:38:29.735054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.099 [2024-11-20 12:38:29.735061] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:24.099 [2024-11-20 12:38:29.735065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:24.099 [2024-11-20 12:38:29.735082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-11-20 12:38:29.735108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.099 [2024-11-20 12:38:29.735206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.099 [2024-11-20 12:38:29.735212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:24.099 [2024-11-20 12:38:29.735215] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735219] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x96e690): datao=0, datal=4096, cccid=0 00:24:24.099 [2024-11-20 12:38:29.735223] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d0100) on tqpair(0x96e690): expected_datao=0, payload_size=4096 00:24:24.099 [2024-11-20 12:38:29.735227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735237] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.099 [2024-11-20 12:38:29.735256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.099 [2024-11-20 12:38:29.735258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.099 [2024-11-20 12:38:29.735270] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:24.099 [2024-11-20 12:38:29.735274] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:24.099 [2024-11-20 12:38:29.735278] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:24.099 [2024-11-20 12:38:29.735285] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:24.099 [2024-11-20 12:38:29.735289] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:24:24.099 [2024-11-20 12:38:29.735293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.099 [2024-11-20 12:38:29.735332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.099 [2024-11-20 12:38:29.735398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.099 [2024-11-20 12:38:29.735403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.099 [2024-11-20 12:38:29.735406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.099 [2024-11-20 12:38:29.735416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.099 [2024-11-20 12:38:29.735433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.099 [2024-11-20 12:38:29.735449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.099 [2024-11-20 12:38:29.735464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.099 [2024-11-20 12:38:29.735479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:24:24.099 [2024-11-20 12:38:29.735494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-11-20 12:38:29.735513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0100, cid 0, qid 0 00:24:24.099 [2024-11-20 12:38:29.735518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0280, cid 1, qid 0 00:24:24.099 [2024-11-20 12:38:29.735522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0400, cid 2, qid 0 00:24:24.099 [2024-11-20 12:38:29.735526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.099 [2024-11-20 12:38:29.735529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0700, cid 4, qid 0 00:24:24.099 [2024-11-20 12:38:29.735637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.099 [2024-11-20 12:38:29.735642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.099 [2024-11-20 12:38:29.735645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0700) on tqpair=0x96e690 00:24:24.099 [2024-11-20 12:38:29.735661] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:24.099 [2024-11-20 12:38:29.735666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:24.099 [2024-11-20 12:38:29.735676] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.099 [2024-11-20 12:38:29.735679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x96e690) 00:24:24.099 [2024-11-20 12:38:29.735684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.099 [2024-11-20 12:38:29.735694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0700, cid 4, qid 0 00:24:24.100 [2024-11-20 12:38:29.735766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.100 [2024-11-20 12:38:29.735771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.100 [2024-11-20 12:38:29.735774] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.735778] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x96e690): datao=0, datal=4096, cccid=4 00:24:24.100 [2024-11-20 12:38:29.735781] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d0700) on tqpair(0x96e690): expected_datao=0, payload_size=4096 00:24:24.100 [2024-11-20 12:38:29.735785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.735796] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.735800] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.100 [2024-11-20 12:38:29.779224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.100 [2024-11-20 12:38:29.779227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0700) on tqpair=0x96e690 00:24:24.100 [2024-11-20 12:38:29.779244] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:24.100 [2024-11-20 12:38:29.779267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x96e690) 00:24:24.100 [2024-11-20 12:38:29.779281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.100 [2024-11-20 12:38:29.779287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x96e690) 00:24:24.100 [2024-11-20 12:38:29.779298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.100 [2024-11-20 12:38:29.779315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0700, cid 4, qid 0 00:24:24.100 [2024-11-20 12:38:29.779320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0880, cid 5, qid 0 00:24:24.100 [2024-11-20 12:38:29.779420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.100 [2024-11-20 12:38:29.779426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.100 [2024-11-20 12:38:29.779429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779432] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x96e690): datao=0, datal=1024, cccid=4 00:24:24.100 [2024-11-20 12:38:29.779436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d0700) on tqpair(0x96e690): expected_datao=0, 
payload_size=1024 00:24:24.100 [2024-11-20 12:38:29.779439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779445] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.100 [2024-11-20 12:38:29.779458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.100 [2024-11-20 12:38:29.779461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.779464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0880) on tqpair=0x96e690 00:24:24.100 [2024-11-20 12:38:29.827210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.100 [2024-11-20 12:38:29.827225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.100 [2024-11-20 12:38:29.827228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0700) on tqpair=0x96e690 00:24:24.100 [2024-11-20 12:38:29.827248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x96e690) 00:24:24.100 [2024-11-20 12:38:29.827260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.100 [2024-11-20 12:38:29.827277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0700, cid 4, qid 0 00:24:24.100 [2024-11-20 12:38:29.827392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.100 [2024-11-20 12:38:29.827399] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.100 [2024-11-20 12:38:29.827402] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827405] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x96e690): datao=0, datal=3072, cccid=4 00:24:24.100 [2024-11-20 12:38:29.827409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d0700) on tqpair(0x96e690): expected_datao=0, payload_size=3072 00:24:24.100 [2024-11-20 12:38:29.827413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827419] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827422] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.100 [2024-11-20 12:38:29.827441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.100 [2024-11-20 12:38:29.827444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0700) on tqpair=0x96e690 00:24:24.100 [2024-11-20 12:38:29.827455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.100 [2024-11-20 12:38:29.827458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x96e690) 00:24:24.100 [2024-11-20 12:38:29.827464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.100 [2024-11-20 12:38:29.827479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0700, cid 4, qid 0 00:24:24.100 [2024-11-20 12:38:29.827550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.100 [2024-11-20 
12:38:29.827555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:24.100 [2024-11-20 12:38:29.827558] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:24.100 [2024-11-20 12:38:29.827561] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x96e690): datao=0, datal=8, cccid=4
00:24:24.100 [2024-11-20 12:38:29.827565] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d0700) on tqpair(0x96e690): expected_datao=0, payload_size=8
00:24:24.100 [2024-11-20 12:38:29.827568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:24.100 [2024-11-20 12:38:29.827574] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:24.100 [2024-11-20 12:38:29.827577] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:24.364 [2024-11-20 12:38:29.868329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:24.364 [2024-11-20 12:38:29.868344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:24.364 [2024-11-20 12:38:29.868347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:24.364 [2024-11-20 12:38:29.868351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0700) on tqpair=0x96e690
00:24:24.364 =====================================================
00:24:24.364 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:24.364 =====================================================
00:24:24.364 Controller Capabilities/Features
00:24:24.364 ================================
00:24:24.364 Vendor ID: 0000
00:24:24.364 Subsystem Vendor ID: 0000
00:24:24.364 Serial Number: ....................
00:24:24.364 Model Number: ........................................
00:24:24.364 Firmware Version: 25.01
00:24:24.364 Recommended Arb Burst: 0
00:24:24.364 IEEE OUI Identifier: 00 00 00
00:24:24.364 Multi-path I/O
00:24:24.364 May have multiple subsystem ports: No
00:24:24.364 May have multiple controllers: No
00:24:24.364 Associated with SR-IOV VF: No
00:24:24.364 Max Data Transfer Size: 131072
00:24:24.364 Max Number of Namespaces: 0
00:24:24.364 Max Number of I/O Queues: 1024
00:24:24.364 NVMe Specification Version (VS): 1.3
00:24:24.364 NVMe Specification Version (Identify): 1.3
00:24:24.364 Maximum Queue Entries: 128
00:24:24.364 Contiguous Queues Required: Yes
00:24:24.364 Arbitration Mechanisms Supported
00:24:24.364 Weighted Round Robin: Not Supported
00:24:24.364 Vendor Specific: Not Supported
00:24:24.364 Reset Timeout: 15000 ms
00:24:24.364 Doorbell Stride: 4 bytes
00:24:24.364 NVM Subsystem Reset: Not Supported
00:24:24.364 Command Sets Supported
00:24:24.364 NVM Command Set: Supported
00:24:24.364 Boot Partition: Not Supported
00:24:24.364 Memory Page Size Minimum: 4096 bytes
00:24:24.364 Memory Page Size Maximum: 4096 bytes
00:24:24.364 Persistent Memory Region: Not Supported
00:24:24.364 Optional Asynchronous Events Supported
00:24:24.364 Namespace Attribute Notices: Not Supported
00:24:24.364 Firmware Activation Notices: Not Supported
00:24:24.364 ANA Change Notices: Not Supported
00:24:24.364 PLE Aggregate Log Change Notices: Not Supported
00:24:24.364 LBA Status Info Alert Notices: Not Supported
00:24:24.364 EGE Aggregate Log Change Notices: Not Supported
00:24:24.364 Normal NVM Subsystem Shutdown event: Not Supported
00:24:24.364 Zone Descriptor Change Notices: Not Supported
00:24:24.364 Discovery Log Change Notices: Supported
00:24:24.364 Controller Attributes
00:24:24.364 128-bit Host Identifier: Not Supported
00:24:24.364 Non-Operational Permissive Mode: Not Supported
00:24:24.364 NVM Sets: Not Supported
00:24:24.364 Read Recovery Levels: Not Supported
00:24:24.364 Endurance Groups: Not Supported
00:24:24.364 Predictable Latency Mode: Not Supported
00:24:24.364 Traffic Based Keep ALive: Not Supported
00:24:24.364 Namespace Granularity: Not Supported
00:24:24.364 SQ Associations: Not Supported
00:24:24.364 UUID List: Not Supported
00:24:24.364 Multi-Domain Subsystem: Not Supported
00:24:24.364 Fixed Capacity Management: Not Supported
00:24:24.364 Variable Capacity Management: Not Supported
00:24:24.364 Delete Endurance Group: Not Supported
00:24:24.364 Delete NVM Set: Not Supported
00:24:24.364 Extended LBA Formats Supported: Not Supported
00:24:24.364 Flexible Data Placement Supported: Not Supported
00:24:24.364 
00:24:24.364 Controller Memory Buffer Support
00:24:24.364 ================================
00:24:24.364 Supported: No
00:24:24.364 
00:24:24.364 Persistent Memory Region Support
00:24:24.364 ================================
00:24:24.364 Supported: No
00:24:24.364 
00:24:24.364 Admin Command Set Attributes
00:24:24.364 ============================
00:24:24.364 Security Send/Receive: Not Supported
00:24:24.364 Format NVM: Not Supported
00:24:24.364 Firmware Activate/Download: Not Supported
00:24:24.364 Namespace Management: Not Supported
00:24:24.364 Device Self-Test: Not Supported
00:24:24.364 Directives: Not Supported
00:24:24.364 NVMe-MI: Not Supported
00:24:24.364 Virtualization Management: Not Supported
00:24:24.364 Doorbell Buffer Config: Not Supported
00:24:24.364 Get LBA Status Capability: Not Supported
00:24:24.364 Command & Feature Lockdown Capability: Not Supported
00:24:24.364 Abort Command Limit: 1
00:24:24.364 Async Event Request Limit: 4
00:24:24.364 Number of Firmware Slots: N/A
00:24:24.364 Firmware Slot 1 Read-Only: N/A
00:24:24.364 Firmware Activation Without Reset: N/A
00:24:24.364 Multiple Update Detection Support: N/A
00:24:24.364 Firmware Update Granularity: No Information Provided
00:24:24.364 Per-Namespace SMART Log: No
00:24:24.364 Asymmetric Namespace Access Log Page: Not Supported
00:24:24.364 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:24.364 Command Effects Log Page: Not Supported
00:24:24.364 Get Log Page Extended Data: Supported
00:24:24.364 Telemetry Log Pages: Not Supported
00:24:24.364 Persistent Event Log Pages: Not Supported
00:24:24.364 Supported Log Pages Log Page: May Support
00:24:24.364 Commands Supported & Effects Log Page: Not Supported
00:24:24.364 Feature Identifiers & Effects Log Page:May Support
00:24:24.364 NVMe-MI Commands & Effects Log Page: May Support
00:24:24.364 Data Area 4 for Telemetry Log: Not Supported
00:24:24.364 Error Log Page Entries Supported: 128
00:24:24.364 Keep Alive: Not Supported
00:24:24.364 
00:24:24.364 NVM Command Set Attributes
00:24:24.364 ==========================
00:24:24.365 Submission Queue Entry Size
00:24:24.365 Max: 1
00:24:24.365 Min: 1
00:24:24.365 Completion Queue Entry Size
00:24:24.365 Max: 1
00:24:24.365 Min: 1
00:24:24.365 Number of Namespaces: 0
00:24:24.365 Compare Command: Not Supported
00:24:24.365 Write Uncorrectable Command: Not Supported
00:24:24.365 Dataset Management Command: Not Supported
00:24:24.365 Write Zeroes Command: Not Supported
00:24:24.365 Set Features Save Field: Not Supported
00:24:24.365 Reservations: Not Supported
00:24:24.365 Timestamp: Not Supported
00:24:24.365 Copy: Not Supported
00:24:24.365 Volatile Write Cache: Not Present
00:24:24.365 Atomic Write Unit (Normal): 1
00:24:24.365 Atomic Write Unit (PFail): 1
00:24:24.365 Atomic Compare & Write Unit: 1
00:24:24.365 Fused Compare & Write: Supported
00:24:24.365 Scatter-Gather List
00:24:24.365 SGL Command Set: Supported
00:24:24.365 SGL Keyed: Supported
00:24:24.365 SGL Bit Bucket Descriptor: Not Supported
00:24:24.365 SGL Metadata Pointer: Not Supported
00:24:24.365 Oversized SGL: Not Supported
00:24:24.365 SGL Metadata Address: Not Supported
00:24:24.365 SGL Offset: Supported
00:24:24.365 Transport SGL Data Block: Not Supported
00:24:24.365 Replay Protected Memory Block: Not Supported
00:24:24.365 
00:24:24.365 Firmware Slot Information
00:24:24.365 =========================
00:24:24.365 Active slot: 0
00:24:24.365 
00:24:24.365 
00:24:24.365 Error Log
00:24:24.365 =========
00:24:24.365 
00:24:24.365 Active Namespaces
00:24:24.365 =================
00:24:24.365 Discovery Log Page
00:24:24.365 ==================
00:24:24.365 Generation Counter: 2
00:24:24.365 Number of Records: 2
00:24:24.365 Record Format: 0
00:24:24.365 
00:24:24.365 Discovery Log Entry 0
00:24:24.365 ----------------------
00:24:24.365 Transport Type: 3 (TCP)
00:24:24.365 Address Family: 1 (IPv4)
00:24:24.365 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:24.365 Entry Flags:
00:24:24.365 Duplicate Returned Information: 1
00:24:24.365 Explicit Persistent Connection Support for Discovery: 1
00:24:24.365 Transport Requirements:
00:24:24.365 Secure Channel: Not Required
00:24:24.365 Port ID: 0 (0x0000)
00:24:24.365 Controller ID: 65535 (0xffff)
00:24:24.365 Admin Max SQ Size: 128
00:24:24.365 Transport Service Identifier: 4420
00:24:24.365 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:24.365 Transport Address: 10.0.0.2
00:24:24.365 Discovery Log Entry 1
00:24:24.365 ----------------------
00:24:24.365 Transport Type: 3 (TCP)
00:24:24.365 Address Family: 1 (IPv4)
00:24:24.365 Subsystem Type: 2 (NVM Subsystem)
00:24:24.365 Entry Flags:
00:24:24.365 Duplicate Returned Information: 0
00:24:24.365 Explicit Persistent Connection Support for Discovery: 0
00:24:24.365 Transport Requirements:
00:24:24.365 Secure Channel: Not Required
00:24:24.365 Port ID: 0 (0x0000)
00:24:24.365 Controller ID: 65535 (0xffff)
00:24:24.365 Admin Max SQ Size: 128
00:24:24.365 Transport Service Identifier: 4420
00:24:24.365 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:24.365 Transport Address: 10.0.0.2 [2024-11-20 12:38:29.868438] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:24.365 [2024-11-20
12:38:29.868449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0100) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.365 [2024-11-20 12:38:29.868460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0280) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.365 [2024-11-20 12:38:29.868468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0400) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.365 [2024-11-20 12:38:29.868476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.365 [2024-11-20 12:38:29.868490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.365 [2024-11-20 12:38:29.868504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.365 [2024-11-20 12:38:29.868518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.365 [2024-11-20 12:38:29.868580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.365 [2024-11-20 
12:38:29.868588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.365 [2024-11-20 12:38:29.868591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.365 [2024-11-20 12:38:29.868613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.365 [2024-11-20 12:38:29.868625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.365 [2024-11-20 12:38:29.868707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.365 [2024-11-20 12:38:29.868713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.365 [2024-11-20 12:38:29.868716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868724] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:24.365 [2024-11-20 12:38:29.868727] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:24.365 [2024-11-20 12:38:29.868735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.365 
[2024-11-20 12:38:29.868742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.365 [2024-11-20 12:38:29.868748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.365 [2024-11-20 12:38:29.868757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.365 [2024-11-20 12:38:29.868816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.365 [2024-11-20 12:38:29.868822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.365 [2024-11-20 12:38:29.868825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.868837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.365 [2024-11-20 12:38:29.868849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.365 [2024-11-20 12:38:29.868858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.365 [2024-11-20 12:38:29.868919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.365 [2024-11-20 12:38:29.868925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.365 [2024-11-20 12:38:29.868928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 
00:24:24.365 [2024-11-20 12:38:29.868939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.868946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.365 [2024-11-20 12:38:29.868951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.365 [2024-11-20 12:38:29.868962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.365 [2024-11-20 12:38:29.869026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.365 [2024-11-20 12:38:29.869031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.365 [2024-11-20 12:38:29.869034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.869037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.365 [2024-11-20 12:38:29.869045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.365 [2024-11-20 12:38:29.869049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 
[2024-11-20 12:38:29.869132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 
00:24:24.366 [2024-11-20 12:38:29.869330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869463] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869665] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869859] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.869951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.869956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.869959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.869971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.869977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.869982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.869992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.870050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 
12:38:29.870056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.870059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.870070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.870082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 12:38:29.870091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.870159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.870164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.870167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.366 [2024-11-20 12:38:29.870178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.366 [2024-11-20 12:38:29.870185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.366 [2024-11-20 12:38:29.870190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.366 [2024-11-20 
12:38:29.870199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.366 [2024-11-20 12:38:29.870264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.366 [2024-11-20 12:38:29.870271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.366 [2024-11-20 12:38:29.870273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:24.367 [2024-11-20 12:38:29.870607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:24.367 [2024-11-20 12:38:29.870814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.870906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.870911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.870914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.870925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.870932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.870937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.870946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.871015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:24.367 [2024-11-20 12:38:29.871021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.871023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.871036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.871047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.871056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.871117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.871123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.871126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.871137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.871146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.871151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:24.367 [2024-11-20 12:38:29.871160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.875210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.875218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.875221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.875224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.875234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.875238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.875241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x96e690) 00:24:24.367 [2024-11-20 12:38:29.875246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.367 [2024-11-20 12:38:29.875257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d0580, cid 3, qid 0 00:24:24.367 [2024-11-20 12:38:29.875412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.367 [2024-11-20 12:38:29.875417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.367 [2024-11-20 12:38:29.875420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.367 [2024-11-20 12:38:29.875423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d0580) on tqpair=0x96e690 00:24:24.367 [2024-11-20 12:38:29.875430] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:24.367 00:24:24.367 12:38:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:24.367 [2024-11-20 12:38:29.912069] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:24.367 [2024-11-20 12:38:29.912107] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264487 ] 00:24:24.367 [2024-11-20 12:38:29.953341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:24.367 [2024-11-20 12:38:29.953389] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:24.367 [2024-11-20 12:38:29.953393] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:24.367 [2024-11-20 12:38:29.953405] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:24.368 [2024-11-20 12:38:29.953414] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:24.368 [2024-11-20 12:38:29.953749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:24.368 [2024-11-20 12:38:29.953774] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa33690 0 00:24:24.368 [2024-11-20 12:38:29.960214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:24.368 [2024-11-20 12:38:29.960229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:24.368 [2024-11-20 12:38:29.960233] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:24.368 [2024-11-20 12:38:29.960238] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: 
host_ddgst_enable: 0 00:24:24.368 [2024-11-20 12:38:29.960268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.960272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.960276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.960285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:24.368 [2024-11-20 12:38:29.960302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:24.368 [2024-11-20 12:38:29.967242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:24.368 [2024-11-20 12:38:29.967246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:24.368 [2024-11-20 12:38:29.967258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967272] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967386] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:24.368 [2024-11-20 12:38:29.967393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:24.368 [2024-11-20 12:38:29.967399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967492] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:24.368 [2024-11-20 12:38:29.967505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967629] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967725] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:24.368 [2024-11-20 12:38:29.967730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967845] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:24.368 [2024-11-20 12:38:29.967849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 
12:38:29.967859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.967938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.967944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.967946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.967953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:24.368 [2024-11-20 12:38:29.967963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.967970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.967975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.967984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.968056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.368 [2024-11-20 12:38:29.968062] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.368 [2024-11-20 12:38:29.968065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.968068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.368 [2024-11-20 12:38:29.968072] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:24.368 [2024-11-20 12:38:29.968076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:24.368 [2024-11-20 12:38:29.968082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:24.368 [2024-11-20 12:38:29.968095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:24.368 [2024-11-20 12:38:29.968102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.968105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.368 [2024-11-20 12:38:29.968111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.368 [2024-11-20 12:38:29.968120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.368 [2024-11-20 12:38:29.968219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.368 [2024-11-20 12:38:29.968225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.368 [2024-11-20 12:38:29.968228] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.368 [2024-11-20 12:38:29.968231] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=4096, cccid=0 00:24:24.369 [2024-11-20 12:38:29.968235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95100) on tqpair(0xa33690): expected_datao=0, payload_size=4096 00:24:24.369 [2024-11-20 12:38:29.968239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:29.968249] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:29.968253] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.369 [2024-11-20 12:38:30.011236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.369 [2024-11-20 12:38:30.011241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.369 [2024-11-20 12:38:30.011262] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:24.369 [2024-11-20 12:38:30.011272] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:24.369 [2024-11-20 12:38:30.011284] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:24.369 [2024-11-20 12:38:30.011304] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:24.369 [2024-11-20 12:38:30.011320] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:24.369 [2024-11-20 12:38:30.011331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 
00:24:24.369 [2024-11-20 12:38:30.011349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.011358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.369 [2024-11-20 12:38:30.011415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.369 [2024-11-20 12:38:30.011543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.369 [2024-11-20 12:38:30.011551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.369 [2024-11-20 12:38:30.011557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.369 [2024-11-20 12:38:30.011575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.369 [2024-11-20 12:38:30.011598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011605] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.369 [2024-11-20 12:38:30.011623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.369 [2024-11-20 12:38:30.011656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.369 [2024-11-20 12:38:30.011723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.011736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.011752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.011760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.011772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.369 [2024-11-20 12:38:30.011803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95100, cid 0, qid 0 00:24:24.369 [2024-11-20 12:38:30.011822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95280, cid 1, qid 0 00:24:24.369 [2024-11-20 12:38:30.011832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95400, cid 2, qid 0 00:24:24.369 [2024-11-20 12:38:30.011840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.369 [2024-11-20 12:38:30.011848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.369 [2024-11-20 12:38:30.012048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.369 [2024-11-20 12:38:30.012060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.369 [2024-11-20 12:38:30.012075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.369 [2024-11-20 12:38:30.012106] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:24.369 [2024-11-20 12:38:30.012114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.012127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.012134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.012142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.012160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.369 [2024-11-20 12:38:30.012178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.369 [2024-11-20 12:38:30.012269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.369 [2024-11-20 12:38:30.012278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.369 [2024-11-20 12:38:30.012281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.369 [2024-11-20 12:38:30.012346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.012357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:24.369 [2024-11-20 12:38:30.012365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.369 [2024-11-20 12:38:30.012376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.369 [2024-11-20 12:38:30.012388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.369 [2024-11-20 12:38:30.012493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.369 [2024-11-20 12:38:30.012499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.369 [2024-11-20 12:38:30.012502] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012505] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=4096, cccid=4 00:24:24.369 [2024-11-20 12:38:30.012512] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95700) on tqpair(0xa33690): expected_datao=0, payload_size=4096 00:24:24.369 [2024-11-20 12:38:30.012517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.369 [2024-11-20 12:38:30.012527] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.012547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.012550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.012563] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:24.370 [2024-11-20 12:38:30.012573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.012599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.012611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.370 [2024-11-20 12:38:30.012704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.370 [2024-11-20 12:38:30.012710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.370 [2024-11-20 12:38:30.012714] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012717] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=4096, cccid=4 00:24:24.370 [2024-11-20 12:38:30.012721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95700) on tqpair(0xa33690): expected_datao=0, payload_size=4096 00:24:24.370 [2024-11-20 12:38:30.012725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.012749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.012752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 
12:38:30.012756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.012770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.012797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.012808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.370 [2024-11-20 12:38:30.012888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.370 [2024-11-20 12:38:30.012896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.370 [2024-11-20 12:38:30.012900] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012903] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=4096, cccid=4 00:24:24.370 [2024-11-20 12:38:30.012908] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95700) on tqpair(0xa33690): expected_datao=0, payload_size=4096 00:24:24.370 [2024-11-20 12:38:30.012912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012918] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012921] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.012935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.012939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.012942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.012949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:24.370 [2024-11-20 12:38:30.012988] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:24.370 [2024-11-20 12:38:30.012993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:24.370 
[2024-11-20 12:38:30.012998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:24.370 [2024-11-20 12:38:30.013011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-11-20 12:38:30.013055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.370 [2024-11-20 12:38:30.013060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95880, cid 5, qid 0 00:24:24.370 [2024-11-20 12:38:30.013145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.013152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.013155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.013167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.013173] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.013176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95880) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.013188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95880, cid 5, qid 0 00:24:24.370 [2024-11-20 12:38:30.013282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.013288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.013291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95880) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.013303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95880, cid 5, qid 0 00:24:24.370 [2024-11-20 12:38:30.013391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.013398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.013401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95880) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.013414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95880, cid 5, qid 0 00:24:24.370 [2024-11-20 12:38:30.013517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.370 [2024-11-20 12:38:30.013524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.370 [2024-11-20 12:38:30.013527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95880) on tqpair=0xa33690 00:24:24.370 [2024-11-20 12:38:30.013543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 
12:38:30.013563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa33690) 00:24:24.370 [2024-11-20 12:38:30.013569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.370 [2024-11-20 12:38:30.013580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.370 [2024-11-20 12:38:30.013583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa33690) 00:24:24.371 [2024-11-20 12:38:30.013589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.371 [2024-11-20 12:38:30.013595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa33690) 00:24:24.371 [2024-11-20 12:38:30.013604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.371 [2024-11-20 12:38:30.013615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95880, cid 5, qid 0 00:24:24.371 [2024-11-20 12:38:30.013620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95700, cid 4, qid 0 00:24:24.371 [2024-11-20 12:38:30.013624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95a00, cid 6, qid 0 00:24:24.371 [2024-11-20 12:38:30.013629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95b80, cid 7, qid 0 00:24:24.371 [2024-11-20 12:38:30.013775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.371 [2024-11-20 12:38:30.013782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:24:24.371 [2024-11-20 12:38:30.013785] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013788] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=8192, cccid=5 00:24:24.371 [2024-11-20 12:38:30.013791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95880) on tqpair(0xa33690): expected_datao=0, payload_size=8192 00:24:24.371 [2024-11-20 12:38:30.013795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013806] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.371 [2024-11-20 12:38:30.013823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.371 [2024-11-20 12:38:30.013826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=512, cccid=4 00:24:24.371 [2024-11-20 12:38:30.013833] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95700) on tqpair(0xa33690): expected_datao=0, payload_size=512 00:24:24.371 [2024-11-20 12:38:30.013836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013841] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013845] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.371 [2024-11-20 12:38:30.013855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.371 [2024-11-20 12:38:30.013857] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013860] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=512, cccid=6 00:24:24.371 [2024-11-20 12:38:30.013864] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95a00) on tqpair(0xa33690): expected_datao=0, payload_size=512 00:24:24.371 [2024-11-20 12:38:30.013868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013873] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013876] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.371 [2024-11-20 12:38:30.013885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.371 [2024-11-20 12:38:30.013890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013893] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa33690): datao=0, datal=4096, cccid=7 00:24:24.371 [2024-11-20 12:38:30.013897] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa95b80) on tqpair(0xa33690): expected_datao=0, payload_size=4096 00:24:24.371 [2024-11-20 12:38:30.013900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.371 [2024-11-20 12:38:30.013921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.371 [2024-11-20 12:38:30.013924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:24.371 [2024-11-20 12:38:30.013927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95880) on tqpair=0xa33690 00:24:24.371 [2024-11-20 12:38:30.013938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.371 [2024-11-20 12:38:30.013943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.371 [2024-11-20 12:38:30.013946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95700) on tqpair=0xa33690 00:24:24.371 [2024-11-20 12:38:30.013958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.371 [2024-11-20 12:38:30.013963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.371 [2024-11-20 12:38:30.013966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95a00) on tqpair=0xa33690 00:24:24.371 [2024-11-20 12:38:30.013975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.371 [2024-11-20 12:38:30.013979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.371 [2024-11-20 12:38:30.013982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.371 [2024-11-20 12:38:30.013986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95b80) on tqpair=0xa33690 00:24:24.371 ===================================================== 00:24:24.371 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.371 ===================================================== 00:24:24.371 Controller Capabilities/Features 00:24:24.371 ================================ 00:24:24.371 Vendor ID: 8086 00:24:24.371 Subsystem Vendor ID: 8086 00:24:24.371 Serial Number: SPDK00000000000001 00:24:24.371 Model Number: SPDK bdev Controller 00:24:24.371 
Firmware Version: 25.01 00:24:24.371 Recommended Arb Burst: 6 00:24:24.371 IEEE OUI Identifier: e4 d2 5c 00:24:24.371 Multi-path I/O 00:24:24.371 May have multiple subsystem ports: Yes 00:24:24.371 May have multiple controllers: Yes 00:24:24.371 Associated with SR-IOV VF: No 00:24:24.371 Max Data Transfer Size: 131072 00:24:24.371 Max Number of Namespaces: 32 00:24:24.371 Max Number of I/O Queues: 127 00:24:24.371 NVMe Specification Version (VS): 1.3 00:24:24.371 NVMe Specification Version (Identify): 1.3 00:24:24.371 Maximum Queue Entries: 128 00:24:24.371 Contiguous Queues Required: Yes 00:24:24.371 Arbitration Mechanisms Supported 00:24:24.371 Weighted Round Robin: Not Supported 00:24:24.371 Vendor Specific: Not Supported 00:24:24.371 Reset Timeout: 15000 ms 00:24:24.371 Doorbell Stride: 4 bytes 00:24:24.371 NVM Subsystem Reset: Not Supported 00:24:24.371 Command Sets Supported 00:24:24.371 NVM Command Set: Supported 00:24:24.371 Boot Partition: Not Supported 00:24:24.371 Memory Page Size Minimum: 4096 bytes 00:24:24.371 Memory Page Size Maximum: 4096 bytes 00:24:24.371 Persistent Memory Region: Not Supported 00:24:24.371 Optional Asynchronous Events Supported 00:24:24.371 Namespace Attribute Notices: Supported 00:24:24.371 Firmware Activation Notices: Not Supported 00:24:24.371 ANA Change Notices: Not Supported 00:24:24.371 PLE Aggregate Log Change Notices: Not Supported 00:24:24.371 LBA Status Info Alert Notices: Not Supported 00:24:24.371 EGE Aggregate Log Change Notices: Not Supported 00:24:24.371 Normal NVM Subsystem Shutdown event: Not Supported 00:24:24.371 Zone Descriptor Change Notices: Not Supported 00:24:24.371 Discovery Log Change Notices: Not Supported 00:24:24.371 Controller Attributes 00:24:24.371 128-bit Host Identifier: Supported 00:24:24.371 Non-Operational Permissive Mode: Not Supported 00:24:24.371 NVM Sets: Not Supported 00:24:24.371 Read Recovery Levels: Not Supported 00:24:24.371 Endurance Groups: Not Supported 00:24:24.371 Predictable 
Latency Mode: Not Supported 00:24:24.371 Traffic Based Keep Alive: Not Supported 00:24:24.371 Namespace Granularity: Not Supported 00:24:24.371 SQ Associations: Not Supported 00:24:24.371 UUID List: Not Supported 00:24:24.371 Multi-Domain Subsystem: Not Supported 00:24:24.371 Fixed Capacity Management: Not Supported 00:24:24.371 Variable Capacity Management: Not Supported 00:24:24.371 Delete Endurance Group: Not Supported 00:24:24.371 Delete NVM Set: Not Supported 00:24:24.371 Extended LBA Formats Supported: Not Supported 00:24:24.371 Flexible Data Placement Supported: Not Supported 00:24:24.371 00:24:24.371 Controller Memory Buffer Support 00:24:24.371 ================================ 00:24:24.371 Supported: No 00:24:24.371 00:24:24.371 Persistent Memory Region Support 00:24:24.371 ================================ 00:24:24.371 Supported: No 00:24:24.371 00:24:24.371 Admin Command Set Attributes 00:24:24.371 ============================ 00:24:24.371 Security Send/Receive: Not Supported 00:24:24.371 Format NVM: Not Supported 00:24:24.371 Firmware Activate/Download: Not Supported 00:24:24.371 Namespace Management: Not Supported 00:24:24.371 Device Self-Test: Not Supported 00:24:24.371 Directives: Not Supported 00:24:24.371 NVMe-MI: Not Supported 00:24:24.371 Virtualization Management: Not Supported 00:24:24.371 Doorbell Buffer Config: Not Supported 00:24:24.371 Get LBA Status Capability: Not Supported 00:24:24.371 Command & Feature Lockdown Capability: Not Supported 00:24:24.371 Abort Command Limit: 4 00:24:24.371 Async Event Request Limit: 4 00:24:24.371 Number of Firmware Slots: N/A 00:24:24.371 Firmware Slot 1 Read-Only: N/A 00:24:24.371 Firmware Activation Without Reset: N/A 00:24:24.371 Multiple Update Detection Support: N/A 00:24:24.372 Firmware Update Granularity: No Information Provided 00:24:24.372 Per-Namespace SMART Log: No 00:24:24.372 Asymmetric Namespace Access Log Page: Not Supported 00:24:24.372 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:24.372 
Command Effects Log Page: Supported 00:24:24.372 Get Log Page Extended Data: Supported 00:24:24.372 Telemetry Log Pages: Not Supported 00:24:24.372 Persistent Event Log Pages: Not Supported 00:24:24.372 Supported Log Pages Log Page: May Support 00:24:24.372 Commands Supported & Effects Log Page: Not Supported 00:24:24.372 Feature Identifiers & Effects Log Page: May Support 00:24:24.372 NVMe-MI Commands & Effects Log Page: May Support 00:24:24.372 Data Area 4 for Telemetry Log: Not Supported 00:24:24.372 Error Log Page Entries Supported: 128 00:24:24.372 Keep Alive: Supported 00:24:24.372 Keep Alive Granularity: 10000 ms 00:24:24.372 00:24:24.372 NVM Command Set Attributes 00:24:24.372 ========================== 00:24:24.372 Submission Queue Entry Size 00:24:24.372 Max: 64 00:24:24.372 Min: 64 00:24:24.372 Completion Queue Entry Size 00:24:24.372 Max: 16 00:24:24.372 Min: 16 00:24:24.372 Number of Namespaces: 32 00:24:24.372 Compare Command: Supported 00:24:24.372 Write Uncorrectable Command: Not Supported 00:24:24.372 Dataset Management Command: Supported 00:24:24.372 Write Zeroes Command: Supported 00:24:24.372 Set Features Save Field: Not Supported 00:24:24.372 Reservations: Supported 00:24:24.372 Timestamp: Not Supported 00:24:24.372 Copy: Supported 00:24:24.372 Volatile Write Cache: Present 00:24:24.372 Atomic Write Unit (Normal): 1 00:24:24.372 Atomic Write Unit (PFail): 1 00:24:24.372 Atomic Compare & Write Unit: 1 00:24:24.372 Fused Compare & Write: Supported 00:24:24.372 Scatter-Gather List 00:24:24.372 SGL Command Set: Supported 00:24:24.372 SGL Keyed: Supported 00:24:24.372 SGL Bit Bucket Descriptor: Not Supported 00:24:24.372 SGL Metadata Pointer: Not Supported 00:24:24.372 Oversized SGL: Not Supported 00:24:24.372 SGL Metadata Address: Not Supported 00:24:24.372 SGL Offset: Supported 00:24:24.372 Transport SGL Data Block: Not Supported 00:24:24.372 Replay Protected Memory Block: Not Supported 00:24:24.372 00:24:24.372 Firmware Slot Information 
00:24:24.372 ========================= 00:24:24.372 Active slot: 1 00:24:24.372 Slot 1 Firmware Revision: 25.01 00:24:24.372 00:24:24.372 00:24:24.372 Commands Supported and Effects 00:24:24.372 ============================== 00:24:24.372 Admin Commands 00:24:24.372 -------------- 00:24:24.372 Get Log Page (02h): Supported 00:24:24.372 Identify (06h): Supported 00:24:24.372 Abort (08h): Supported 00:24:24.372 Set Features (09h): Supported 00:24:24.372 Get Features (0Ah): Supported 00:24:24.372 Asynchronous Event Request (0Ch): Supported 00:24:24.372 Keep Alive (18h): Supported 00:24:24.372 I/O Commands 00:24:24.372 ------------ 00:24:24.372 Flush (00h): Supported LBA-Change 00:24:24.372 Write (01h): Supported LBA-Change 00:24:24.372 Read (02h): Supported 00:24:24.372 Compare (05h): Supported 00:24:24.372 Write Zeroes (08h): Supported LBA-Change 00:24:24.372 Dataset Management (09h): Supported LBA-Change 00:24:24.372 Copy (19h): Supported LBA-Change 00:24:24.372 00:24:24.372 Error Log 00:24:24.372 ========= 00:24:24.372 00:24:24.372 Arbitration 00:24:24.372 =========== 00:24:24.372 Arbitration Burst: 1 00:24:24.372 00:24:24.372 Power Management 00:24:24.372 ================ 00:24:24.372 Number of Power States: 1 00:24:24.372 Current Power State: Power State #0 00:24:24.372 Power State #0: 00:24:24.372 Max Power: 0.00 W 00:24:24.372 Non-Operational State: Operational 00:24:24.372 Entry Latency: Not Reported 00:24:24.372 Exit Latency: Not Reported 00:24:24.372 Relative Read Throughput: 0 00:24:24.372 Relative Read Latency: 0 00:24:24.372 Relative Write Throughput: 0 00:24:24.372 Relative Write Latency: 0 00:24:24.372 Idle Power: Not Reported 00:24:24.372 Active Power: Not Reported 00:24:24.372 Non-Operational Permissive Mode: Not Supported 00:24:24.372 00:24:24.372 Health Information 00:24:24.372 ================== 00:24:24.372 Critical Warnings: 00:24:24.372 Available Spare Space: OK 00:24:24.372 Temperature: OK 00:24:24.372 Device Reliability: OK 00:24:24.372 Read 
Only: No 00:24:24.372 Volatile Memory Backup: OK 00:24:24.372 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:24.372 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:24.372 Available Spare: 0% 00:24:24.372 Available Spare Threshold: 0% 00:24:24.372 Life Percentage Used:[2024-11-20 12:38:30.014071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa33690) 00:24:24.372 [2024-11-20 12:38:30.014082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.372 [2024-11-20 12:38:30.014093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95b80, cid 7, qid 0 00:24:24.372 [2024-11-20 12:38:30.014174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.372 [2024-11-20 12:38:30.014180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.372 [2024-11-20 12:38:30.014183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95b80) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014218] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:24.372 [2024-11-20 12:38:30.014227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95100) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.372 [2024-11-20 12:38:30.014238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95280) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.372 [2024-11-20 12:38:30.014246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95400) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.372 [2024-11-20 12:38:30.014258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.372 [2024-11-20 12:38:30.014269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.372 [2024-11-20 12:38:30.014281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.372 [2024-11-20 12:38:30.014293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.372 [2024-11-20 12:38:30.014351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.372 [2024-11-20 12:38:30.014357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.372 [2024-11-20 12:38:30.014360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:24.372 [2024-11-20 12:38:30.014375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.372 [2024-11-20 12:38:30.014381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.372 [2024-11-20 12:38:30.014393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.372 [2024-11-20 12:38:30.014467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.372 [2024-11-20 12:38:30.014473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.372 [2024-11-20 12:38:30.014476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014483] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:24.372 [2024-11-20 12:38:30.014487] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:24.372 [2024-11-20 12:38:30.014495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.372 [2024-11-20 12:38:30.014507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.372 [2024-11-20 12:38:30.014516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.372 [2024-11-20 12:38:30.014578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.372 [2024-11-20 
12:38:30.014584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.372 [2024-11-20 12:38:30.014587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.372 [2024-11-20 12:38:30.014599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.372 [2024-11-20 12:38:30.014606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.014611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.014623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.014685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.014691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.014694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.014705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.014717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 
12:38:30.014726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.014795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.014800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.014804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.014815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.014827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.014835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.014902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.014908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.014911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.014922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.014928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.014934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.014943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.015009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.015014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.015017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.015021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.015028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.015032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.015035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.015040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.015049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.015110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.015116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.015119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.015122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.015130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:24.373 [2024-11-20 12:38:30.015134] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.015137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.015143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.015152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.019210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.019218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.019221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.019224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.019233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.019237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.373 [2024-11-20 12:38:30.019240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa33690) 00:24:24.373 [2024-11-20 12:38:30.019246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.373 [2024-11-20 12:38:30.019257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa95580, cid 3, qid 0 00:24:24.373 [2024-11-20 12:38:30.019404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.373 [2024-11-20 12:38:30.019409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.373 [2024-11-20 12:38:30.019412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:24.373 [2024-11-20 12:38:30.019416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa95580) on tqpair=0xa33690 00:24:24.373 [2024-11-20 12:38:30.019422] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:24.373 0% 00:24:24.373 Data Units Read: 0 00:24:24.373 Data Units Written: 0 00:24:24.373 Host Read Commands: 0 00:24:24.373 Host Write Commands: 0 00:24:24.373 Controller Busy Time: 0 minutes 00:24:24.373 Power Cycles: 0 00:24:24.373 Power On Hours: 0 hours 00:24:24.373 Unsafe Shutdowns: 0 00:24:24.373 Unrecoverable Media Errors: 0 00:24:24.373 Lifetime Error Log Entries: 0 00:24:24.373 Warning Temperature Time: 0 minutes 00:24:24.373 Critical Temperature Time: 0 minutes 00:24:24.373 00:24:24.373 Number of Queues 00:24:24.373 ================ 00:24:24.373 Number of I/O Submission Queues: 127 00:24:24.373 Number of I/O Completion Queues: 127 00:24:24.373 00:24:24.373 Active Namespaces 00:24:24.373 ================= 00:24:24.373 Namespace ID:1 00:24:24.373 Error Recovery Timeout: Unlimited 00:24:24.373 Command Set Identifier: NVM (00h) 00:24:24.373 Deallocate: Supported 00:24:24.373 Deallocated/Unwritten Error: Not Supported 00:24:24.373 Deallocated Read Value: Unknown 00:24:24.373 Deallocate in Write Zeroes: Not Supported 00:24:24.373 Deallocated Guard Field: 0xFFFF 00:24:24.373 Flush: Supported 00:24:24.373 Reservation: Supported 00:24:24.373 Namespace Sharing Capabilities: Multiple Controllers 00:24:24.373 Size (in LBAs): 131072 (0GiB) 00:24:24.373 Capacity (in LBAs): 131072 (0GiB) 00:24:24.373 Utilization (in LBAs): 131072 (0GiB) 00:24:24.373 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:24.373 EUI64: ABCDEF0123456789 00:24:24.373 UUID: 38dbae00-28ad-4eba-b848-6ccbf0d10fc0 00:24:24.373 Thin Provisioning: Not Supported 00:24:24.373 Per-NS Atomic Units: Yes 00:24:24.373 Atomic Boundary Size (Normal): 0 00:24:24.373 Atomic Boundary Size (PFail): 0 00:24:24.373 
Atomic Boundary Offset: 0 00:24:24.373 Maximum Single Source Range Length: 65535 00:24:24.373 Maximum Copy Length: 65535 00:24:24.373 Maximum Source Range Count: 1 00:24:24.373 NGUID/EUI64 Never Reused: No 00:24:24.373 Namespace Write Protected: No 00:24:24.373 Number of LBA Formats: 1 00:24:24.373 Current LBA Format: LBA Format #00 00:24:24.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:24.373 00:24:24.373 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:24.373 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.373 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.373 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.374 rmmod nvme_tcp 00:24:24.374 rmmod nvme_fabrics 00:24:24.374 rmmod nvme_keyring 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@128 -- # set -e 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 264304 ']' 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 264304 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 264304 ']' 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 264304 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:24.374 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264304 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264304' 00:24:24.633 killing process with pid 264304 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 264304 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 264304 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:24.633 
12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.633 12:38:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.170 00:24:27.170 real 0m9.933s 00:24:27.170 user 0m8.004s 00:24:27.170 sys 0m4.936s 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.170 ************************************ 00:24:27.170 END TEST nvmf_identify 00:24:27.170 ************************************ 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.170 ************************************ 00:24:27.170 START TEST nvmf_perf 00:24:27.170 ************************************ 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:27.170 * Looking for test storage... 00:24:27.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.170 12:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:27.170 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.171 --rc genhtml_branch_coverage=1 00:24:27.171 --rc genhtml_function_coverage=1 00:24:27.171 --rc genhtml_legend=1 00:24:27.171 --rc geninfo_all_blocks=1 00:24:27.171 --rc geninfo_unexecuted_blocks=1 00:24:27.171 00:24:27.171 ' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:24:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.171 --rc genhtml_branch_coverage=1 00:24:27.171 --rc genhtml_function_coverage=1 00:24:27.171 --rc genhtml_legend=1 00:24:27.171 --rc geninfo_all_blocks=1 00:24:27.171 --rc geninfo_unexecuted_blocks=1 00:24:27.171 00:24:27.171 ' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.171 --rc genhtml_branch_coverage=1 00:24:27.171 --rc genhtml_function_coverage=1 00:24:27.171 --rc genhtml_legend=1 00:24:27.171 --rc geninfo_all_blocks=1 00:24:27.171 --rc geninfo_unexecuted_blocks=1 00:24:27.171 00:24:27.171 ' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:27.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.171 --rc genhtml_branch_coverage=1 00:24:27.171 --rc genhtml_function_coverage=1 00:24:27.171 --rc genhtml_legend=1 00:24:27.171 --rc geninfo_all_blocks=1 00:24:27.171 --rc geninfo_unexecuted_blocks=1 00:24:27.171 00:24:27.171 ' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.171 12:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:27.171 12:38:32 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.171 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.172 12:38:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.579 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.579 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.838 12:38:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.838 
12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.838 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:32.839 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:32.839 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:32.839 Found net devices under 0000:86:00.0: cvl_0_0 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.839 12:38:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:32.839 Found net devices under 0000:86:00.1: cvl_0_1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.839 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:24:33.098 00:24:33.098 --- 10.0.0.2 ping statistics --- 00:24:33.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.098 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:24:33.098 00:24:33.098 --- 10.0.0.1 ping statistics --- 00:24:33.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.098 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=268099 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 268099 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 268099 ']' 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.098 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.098 [2024-11-20 12:38:38.723429] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:24:33.098 [2024-11-20 12:38:38.723480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.098 [2024-11-20 12:38:38.791350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.098 [2024-11-20 12:38:38.836400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.098 [2024-11-20 12:38:38.836436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.098 [2024-11-20 12:38:38.836443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.098 [2024-11-20 12:38:38.836449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.098 [2024-11-20 12:38:38.836454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.098 [2024-11-20 12:38:38.838040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.098 [2024-11-20 12:38:38.838147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.098 [2024-11-20 12:38:38.838191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.098 [2024-11-20 12:38:38.838191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:33.357 12:38:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:36.654 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:36.654 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:36.654 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:36.654 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:36.913 12:38:42 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:36.913 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:36.913 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:36.913 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:36.913 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:36.913 [2024-11-20 12:38:42.621295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.913 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.171 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:37.171 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.429 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:37.429 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:37.688 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.688 [2024-11-20 12:38:43.437742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.946 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:37.946 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:37.946 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:37.946 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:37.946 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:39.323 Initializing NVMe Controllers 00:24:39.323 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:39.323 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:39.323 Initialization complete. Launching workers. 00:24:39.323 ======================================================== 00:24:39.323 Latency(us) 00:24:39.323 Device Information : IOPS MiB/s Average min max 00:24:39.323 PCIE (0000:5e:00.0) NSID 1 from core 0: 98026.90 382.92 325.86 25.03 4425.42 00:24:39.323 ======================================================== 00:24:39.323 Total : 98026.90 382.92 325.86 25.03 4425.42 00:24:39.323 00:24:39.323 12:38:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.699 Initializing NVMe Controllers 00:24:40.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:40.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:40.699 Initialization complete. Launching workers. 
00:24:40.699 ======================================================== 00:24:40.699 Latency(us) 00:24:40.699 Device Information : IOPS MiB/s Average min max 00:24:40.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8869.92 107.79 45220.23 00:24:40.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16454.81 6984.86 48846.03 00:24:40.700 ======================================================== 00:24:40.700 Total : 177.00 0.69 11483.92 107.79 48846.03 00:24:40.700 00:24:40.700 12:38:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.637 Initializing NVMe Controllers 00:24:41.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.637 Initialization complete. Launching workers. 
00:24:41.637 ======================================================== 00:24:41.637 Latency(us) 00:24:41.637 Device Information : IOPS MiB/s Average min max 00:24:41.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11437.00 44.68 2797.84 401.29 7089.22 00:24:41.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3933.00 15.36 8186.55 6389.74 16110.79 00:24:41.637 ======================================================== 00:24:41.637 Total : 15370.00 60.04 4176.75 401.29 16110.79 00:24:41.637 00:24:41.895 12:38:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:41.895 12:38:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:41.895 12:38:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.427 Initializing NVMe Controllers 00:24:44.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.427 Controller IO queue size 128, less than required. 00:24:44.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.427 Controller IO queue size 128, less than required. 00:24:44.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.427 Initialization complete. Launching workers. 
00:24:44.427 ======================================================== 00:24:44.427 Latency(us) 00:24:44.427 Device Information : IOPS MiB/s Average min max 00:24:44.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1807.97 451.99 71938.44 39630.40 111326.13 00:24:44.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.49 150.87 218295.71 86857.03 326333.14 00:24:44.427 ======================================================== 00:24:44.427 Total : 2411.46 602.86 108565.69 39630.40 326333.14 00:24:44.427 00:24:44.427 12:38:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:44.686 No valid NVMe controllers or AIO or URING devices found 00:24:44.686 Initializing NVMe Controllers 00:24:44.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.686 Controller IO queue size 128, less than required. 00:24:44.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.686 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:44.686 Controller IO queue size 128, less than required. 00:24:44.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:44.686 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:44.686 WARNING: Some requested NVMe devices were skipped 00:24:44.686 12:38:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:47.220 Initializing NVMe Controllers 00:24:47.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.220 Controller IO queue size 128, less than required. 00:24:47.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.220 Controller IO queue size 128, less than required. 00:24:47.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.220 Initialization complete. Launching workers. 
00:24:47.220 00:24:47.220 ==================== 00:24:47.220 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:47.220 TCP transport: 00:24:47.220 polls: 11442 00:24:47.220 idle_polls: 8214 00:24:47.220 sock_completions: 3228 00:24:47.220 nvme_completions: 6183 00:24:47.220 submitted_requests: 9340 00:24:47.220 queued_requests: 1 00:24:47.220 00:24:47.220 ==================== 00:24:47.220 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:47.220 TCP transport: 00:24:47.220 polls: 11554 00:24:47.220 idle_polls: 7657 00:24:47.220 sock_completions: 3897 00:24:47.220 nvme_completions: 6651 00:24:47.220 submitted_requests: 9958 00:24:47.220 queued_requests: 1 00:24:47.220 ======================================================== 00:24:47.220 Latency(us) 00:24:47.220 Device Information : IOPS MiB/s Average min max 00:24:47.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1545.18 386.30 85111.20 47815.97 158754.25 00:24:47.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1662.16 415.54 77903.02 42310.21 132415.80 00:24:47.220 ======================================================== 00:24:47.220 Total : 3207.35 801.84 81375.67 42310.21 158754.25 00:24:47.220 00:24:47.221 12:38:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:47.221 12:38:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.479 rmmod nvme_tcp 00:24:47.479 rmmod nvme_fabrics 00:24:47.479 rmmod nvme_keyring 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 268099 ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 268099 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 268099 ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 268099 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268099 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268099' 00:24:47.479 killing process with pid 268099 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 268099 00:24:47.479 12:38:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 268099 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.384 12:38:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.922 00:24:51.922 real 0m24.687s 00:24:51.922 user 1m4.722s 00:24:51.922 sys 0m8.263s 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.922 ************************************ 00:24:51.922 END TEST nvmf_perf 00:24:51.922 ************************************ 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.922 ************************************ 00:24:51.922 START TEST nvmf_fio_host 00:24:51.922 ************************************ 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:51.922 * Looking for test storage... 00:24:51.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.922 12:38:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.922 12:38:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.922 --rc genhtml_branch_coverage=1 00:24:51.922 --rc genhtml_function_coverage=1 00:24:51.922 --rc genhtml_legend=1 00:24:51.922 --rc geninfo_all_blocks=1 00:24:51.922 --rc geninfo_unexecuted_blocks=1 00:24:51.922 00:24:51.922 ' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.922 --rc genhtml_branch_coverage=1 00:24:51.922 --rc genhtml_function_coverage=1 00:24:51.922 --rc genhtml_legend=1 00:24:51.922 --rc geninfo_all_blocks=1 00:24:51.922 --rc geninfo_unexecuted_blocks=1 00:24:51.922 00:24:51.922 ' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.922 --rc genhtml_branch_coverage=1 00:24:51.922 --rc genhtml_function_coverage=1 00:24:51.922 --rc genhtml_legend=1 00:24:51.922 --rc geninfo_all_blocks=1 00:24:51.922 --rc geninfo_unexecuted_blocks=1 00:24:51.922 00:24:51.922 ' 00:24:51.922 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.923 --rc genhtml_branch_coverage=1 00:24:51.923 --rc genhtml_function_coverage=1 00:24:51.923 --rc genhtml_legend=1 00:24:51.923 --rc geninfo_all_blocks=1 00:24:51.923 --rc geninfo_unexecuted_blocks=1 00:24:51.923 00:24:51.923 ' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.923 12:38:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.923 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.495 12:39:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.495 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.495 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.496 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.496 12:39:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:24:58.496 00:24:58.496 --- 10.0.0.2 ping statistics --- 00:24:58.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.496 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:24:58.496 00:24:58.496 --- 10.0.0.1 ping statistics --- 00:24:58.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.496 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=274336 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 274336 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 274336 ']' 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.496 [2024-11-20 12:39:03.507954] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:24:58.496 [2024-11-20 12:39:03.507996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.496 [2024-11-20 12:39:03.585852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.496 [2024-11-20 12:39:03.628683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.496 [2024-11-20 12:39:03.628719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:58.496 [2024-11-20 12:39:03.628726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.496 [2024-11-20 12:39:03.628732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.496 [2024-11-20 12:39:03.628736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.496 [2024-11-20 12:39:03.630336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.496 [2024-11-20 12:39:03.630445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.496 [2024-11-20 12:39:03.630551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.496 [2024-11-20 12:39:03.630553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:58.496 [2024-11-20 12:39:03.891831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.496 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:58.496 Malloc1 00:24:58.496 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.755 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:59.013 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.013 [2024-11-20 12:39:04.769582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.271 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:59.271 12:39:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:59.271 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:59.534 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:59.534 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:59.534 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:59.534 12:39:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:59.792 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:59.792 fio-3.35 00:24:59.792 Starting 1 thread 00:25:02.320 00:25:02.320 test: (groupid=0, jobs=1): err= 0: pid=275044: Wed Nov 20 12:39:07 2024 00:25:02.320 read: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.2MiB/2005msec) 00:25:02.320 slat (nsec): min=1533, max=263475, avg=1743.33, stdev=2400.63 00:25:02.320 clat (usec): min=2863, max=10179, avg=5941.96, stdev=442.10 00:25:02.320 lat (usec): min=2895, max=10180, avg=5943.71, stdev=442.05 00:25:02.320 clat percentiles (usec): 00:25:02.320 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:02.320 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:25:02.320 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:25:02.320 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 9110], 00:25:02.320 | 99.99th=[10159] 00:25:02.320 bw ( KiB/s): min=46528, max=48168, per=99.94%, avg=47550.00, stdev=757.89, samples=4 00:25:02.320 iops : min=11634, max=12042, avg=11888.00, stdev=188.57, samples=4 00:25:02.320 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:25:02.320 slat (nsec): min=1587, max=230003, avg=1813.31, stdev=1674.92 00:25:02.320 clat (usec): min=2493, max=9549, avg=4809.75, stdev=375.56 00:25:02.320 lat (usec): min=2510, max=9551, avg=4811.57, stdev=375.59 00:25:02.320 clat percentiles (usec): 00:25:02.320 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:25:02.320 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 
00:25:02.320 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5342], 00:25:02.320 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7701], 99.95th=[ 8455], 00:25:02.320 | 99.99th=[ 9503] 00:25:02.320 bw ( KiB/s): min=47000, max=47808, per=100.00%, avg=47360.00, stdev=385.39, samples=4 00:25:02.320 iops : min=11750, max=11952, avg=11840.00, stdev=96.35, samples=4 00:25:02.320 lat (msec) : 4=0.71%, 10=99.28%, 20=0.01% 00:25:02.320 cpu : usr=75.65%, sys=23.25%, ctx=95, majf=0, minf=3 00:25:02.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:02.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.320 issued rwts: total=23848,23735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.320 00:25:02.320 Run status group 0 (all jobs): 00:25:02.320 READ: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.2MiB (97.7MB), run=2005-2005msec 00:25:02.320 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:02.320 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:02.320 fio-3.35 00:25:02.320 Starting 1 thread 00:25:04.848 00:25:04.848 test: (groupid=0, jobs=1): err= 0: pid=275678: Wed Nov 20 12:39:10 2024 00:25:04.848 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2006msec) 00:25:04.848 slat (nsec): min=2484, max=86603, avg=2792.83, stdev=1246.46 00:25:04.848 clat (usec): min=1482, max=50002, avg=6894.41, stdev=3390.48 00:25:04.848 lat (usec): min=1485, max=50005, avg=6897.20, stdev=3390.52 00:25:04.848 clat percentiles (usec): 00:25:04.848 | 1.00th=[ 3654], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:25:04.848 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:25:04.848 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9372], 00:25:04.848 | 99.00th=[10945], 99.50th=[43254], 99.90th=[49021], 99.95th=[49546], 00:25:04.848 | 99.99th=[50070] 00:25:04.848 bw ( KiB/s): min=82432, max=94880, per=51.32%, avg=88640.00, stdev=5530.42, samples=4 00:25:04.848 iops : min= 5152, max= 5930, avg=5540.00, stdev=345.65, samples=4 00:25:04.848 write: IOPS=6504, BW=102MiB/s (107MB/s)(181MiB/1783msec); 0 zone resets 00:25:04.848 slat (usec): min=28, max=382, avg=31.43, stdev= 7.04 00:25:04.848 clat (usec): min=3585, max=15418, avg=8609.92, stdev=1536.79 00:25:04.848 lat (usec): min=3615, max=15447, avg=8641.35, stdev=1538.13 00:25:04.848 clat percentiles (usec): 00:25:04.848 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6849], 
20.00th=[ 7308], 00:25:04.848 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:25:04.848 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11469], 00:25:04.849 | 99.00th=[13042], 99.50th=[14091], 99.90th=[14746], 99.95th=[15139], 00:25:04.849 | 99.99th=[15401] 00:25:04.849 bw ( KiB/s): min=86176, max=98656, per=88.53%, avg=92128.00, stdev=5423.47, samples=4 00:25:04.849 iops : min= 5386, max= 6166, avg=5758.00, stdev=338.97, samples=4 00:25:04.849 lat (msec) : 2=0.02%, 4=1.72%, 10=90.62%, 20=7.26%, 50=0.38% 00:25:04.849 lat (msec) : 100=0.01% 00:25:04.849 cpu : usr=86.08%, sys=13.32%, ctx=36, majf=0, minf=3 00:25:04.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:04.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:04.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:04.849 issued rwts: total=21655,11597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:04.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:04.849 00:25:04.849 Run status group 0 (all jobs): 00:25:04.849 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (355MB), run=2006-2006msec 00:25:04.849 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=181MiB (190MB), run=1783-1783msec 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.849 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.849 rmmod nvme_tcp 00:25:04.849 rmmod nvme_fabrics 00:25:05.108 rmmod nvme_keyring 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 274336 ']' 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 274336 ']' 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 274336' 00:25:05.108 killing process with pid 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 274336 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.108 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.367 12:39:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.273 00:25:07.273 real 0m15.679s 00:25:07.273 user 0m45.562s 00:25:07.273 sys 0m6.466s 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.273 
************************************ 00:25:07.273 END TEST nvmf_fio_host 00:25:07.273 ************************************ 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.273 12:39:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.273 ************************************ 00:25:07.273 START TEST nvmf_failover 00:25:07.273 ************************************ 00:25:07.273 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:07.534 * Looking for test storage... 00:25:07.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.534 12:39:13 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.534 --rc genhtml_branch_coverage=1 00:25:07.534 --rc genhtml_function_coverage=1 00:25:07.534 --rc genhtml_legend=1 00:25:07.534 --rc geninfo_all_blocks=1 00:25:07.534 --rc geninfo_unexecuted_blocks=1 00:25:07.534 00:25:07.534 ' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.534 --rc genhtml_branch_coverage=1 00:25:07.534 --rc genhtml_function_coverage=1 00:25:07.534 --rc genhtml_legend=1 00:25:07.534 --rc geninfo_all_blocks=1 00:25:07.534 --rc geninfo_unexecuted_blocks=1 00:25:07.534 00:25:07.534 ' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.534 --rc genhtml_branch_coverage=1 00:25:07.534 --rc genhtml_function_coverage=1 00:25:07.534 --rc genhtml_legend=1 00:25:07.534 --rc geninfo_all_blocks=1 00:25:07.534 --rc geninfo_unexecuted_blocks=1 00:25:07.534 00:25:07.534 ' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.534 --rc genhtml_branch_coverage=1 00:25:07.534 --rc genhtml_function_coverage=1 00:25:07.534 --rc 
genhtml_legend=1 00:25:07.534 --rc geninfo_all_blocks=1 00:25:07.534 --rc geninfo_unexecuted_blocks=1 00:25:07.534 00:25:07.534 ' 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.534 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.535 12:39:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.535 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.109 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.110 12:39:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.110 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.110 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.110 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:25:14.110 00:25:14.110 --- 10.0.0.2 ping statistics --- 00:25:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.110 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:14.110 00:25:14.110 --- 10.0.0.1 ping statistics --- 00:25:14.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.110 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:14.110 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=279649 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 279649 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 279649 ']' 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.111 [2024-11-20 12:39:19.223656] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:14.111 [2024-11-20 12:39:19.223697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.111 [2024-11-20 12:39:19.302550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.111 [2024-11-20 12:39:19.343528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.111 [2024-11-20 12:39:19.343564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.111 [2024-11-20 12:39:19.343570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.111 [2024-11-20 12:39:19.343577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:14.111 [2024-11-20 12:39:19.343582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.111 [2024-11-20 12:39:19.344992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.111 [2024-11-20 12:39:19.345097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.111 [2024-11-20 12:39:19.345098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.111 [2024-11-20 12:39:19.640436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.111 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:14.111 Malloc0 00:25:14.369 12:39:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.369 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.627 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.885 [2024-11-20 12:39:20.480038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.885 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.143 [2024-11-20 12:39:20.668549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:15.143 [2024-11-20 12:39:20.857158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=279911 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 279911 /var/tmp/bdevperf.sock 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 279911 ']' 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.143 12:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.401 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.401 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:15.401 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:15.659 NVMe0n1 00:25:15.916 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.174 00:25:16.174 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=280137 00:25:16.174 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.174 12:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:17.547 12:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.547 [2024-11-20 12:39:23.068796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 [2024-11-20 12:39:23.068845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 [2024-11-20 12:39:23.068858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 [2024-11-20 12:39:23.068864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 [2024-11-20 12:39:23.068870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 [2024-11-20 12:39:23.068876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6dd2d0 is same with the state(6) to be set 00:25:17.547 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:20.828 12:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:20.828 00:25:20.828 12:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.086 [2024-11-20 12:39:26.687997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6de060 is same with the state(6) to be set 
00:25:21.086 [2024-11-20 12:39:26.688036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6de060 is same with the state(6) to be set 00:25:21.087 12:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:24.370 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.370 [2024-11-20 12:39:29.915190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.370 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:25.303 12:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:25.561 12:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 280137 00:25:32.279 { 00:25:32.279 "results": [ 00:25:32.279 { 00:25:32.279 "job": "NVMe0n1", 00:25:32.279 "core_mask": "0x1", 00:25:32.279 "workload": "verify", 00:25:32.279 "status": "finished", 00:25:32.279 "verify_range": { 00:25:32.279 "start": 0, 00:25:32.279 "length": 16384 00:25:32.279 }, 00:25:32.279 "queue_depth": 128, 00:25:32.279 "io_size": 4096, 00:25:32.279 "runtime": 15.001657, 00:25:32.279 "iops": 11329.74844045561, 00:25:32.279 "mibps": 44.25682984552973, 00:25:32.279 "io_failed": 7821, 00:25:32.279 "io_timeout": 0, 00:25:32.279 "avg_latency_us": 10779.268224558899, 00:25:32.279 "min_latency_us": 413.50095238095236, 00:25:32.279 "max_latency_us": 17850.758095238096 00:25:32.279 } 00:25:32.279 ], 00:25:32.279 "core_count": 1 00:25:32.279 } 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 
279911 ']' 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279911' 00:25:32.279 killing process with pid 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 279911 00:25:32.279 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:32.279 [2024-11-20 12:39:20.922672] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:32.279 [2024-11-20 12:39:20.922725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279911 ] 00:25:32.279 [2024-11-20 12:39:20.995667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.279 [2024-11-20 12:39:21.037029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.279 Running I/O for 15 seconds... 
00:25:32.279 11350.00 IOPS, 44.34 MiB/s [2024-11-20T11:39:38.045Z] [2024-11-20 12:39:23.069441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.279 [2024-11-20 12:39:23.069476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.279 [2024-11-20 12:39:23.069490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.279 [2024-11-20 12:39:23.069497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.279 [2024-11-20 12:39:23.069506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.280 [2024-11-20 12:39:23.069513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.280 [2024-11-20 12:39:23.069522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.280 [2024-11-20 12:39:23.069528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.280 [2024-11-20 12:39:23.069536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.280 [2024-11-20 12:39:23.069543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.280 [2024-11-20 12:39:23.069550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:32.280 [2024-11-20 12:39:23.069557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for WRITE lba 101328 through 101976 (and READ lba 101080, 101088), every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:25:32.282 [2024-11-20 12:39:23.070791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 
[2024-11-20 12:39:23.070798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.282 [2024-11-20 12:39:23.070976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.070997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102088 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101096 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101112 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101120 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101128 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071140] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101136 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.282 [2024-11-20 12:39:23.071180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.282 [2024-11-20 12:39:23.071185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.282 [2024-11-20 12:39:23.071190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101152 len:8 PRP1 0x0 PRP2 0x0 00:25:32.282 [2024-11-20 12:39:23.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101160 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 
[2024-11-20 12:39:23.071223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101176 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101184 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101192 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101200 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101208 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102096 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101216 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101224 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101232 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101248 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101256 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.283 [2024-11-20 12:39:23.071552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.283 [2024-11-20 12:39:23.071557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:25:32.283 [2024-11-20 12:39:23.071563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071605] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:32.283 [2024-11-20 12:39:23.071628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.283 [2024-11-20 12:39:23.071635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.071642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.283 [2024-11-20 12:39:23.083169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.083185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:32.283 [2024-11-20 12:39:23.083194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.083208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.283 [2024-11-20 12:39:23.083217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:23.083226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:32.283 [2024-11-20 12:39:23.083261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b04340 (9): Bad file descriptor 00:25:32.283 [2024-11-20 12:39:23.086985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:32.283 [2024-11-20 12:39:23.109513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:25:32.283 11173.50 IOPS, 43.65 MiB/s [2024-11-20T11:39:38.049Z] 11285.33 IOPS, 44.08 MiB/s [2024-11-20T11:39:38.049Z] 11306.25 IOPS, 44.17 MiB/s [2024-11-20T11:39:38.049Z] [2024-11-20 12:39:26.688384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.283 [2024-11-20 12:39:26.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.283 [2024-11-20 12:39:26.688428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.283 [2024-11-20 12:39:26.688435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.284 [2024-11-20 12:39:26.688449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.284 [2024-11-20 12:39:26.688462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b04340 is same with the state(6) to be set 00:25:32.284 [2024-11-20 12:39:26.688522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.284 [2024-11-20 12:39:26.688701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.284 [2024-11-20 12:39:26.688710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:32.284 [2024-11-20 12:39:26.688716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" pairs repeated for lba:43528 through lba:43920 (len:8, SGL TRANSPORT DATA BLOCK), interleaved with one WRITE at lba:43944 cid:33 ...]
00:25:32.285 [2024-11-20 12:39:26.689459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.285 [2024-11-20 12:39:26.689466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION" pairs repeated for lba:43960 through lba:44432 (len:8, SGL DATA BLOCK OFFSET) ...]
00:25:32.287 [2024-11-20 12:39:26.690328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.287 [2024-11-20 12:39:26.690334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.287 [2024-11-20 12:39:26.690342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.287 [2024-11-20 12:39:26.690349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.287 [2024-11-20 12:39:26.690380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.287 [2024-11-20 12:39:26.690387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.287 [2024-11-20 12:39:26.690392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43936 len:8 PRP1 0x0 PRP2 0x0 00:25:32.287 [2024-11-20 12:39:26.690399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.287 [2024-11-20 12:39:26.690441] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:32.287 [2024-11-20 12:39:26.690451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:32.287 [2024-11-20 12:39:26.693199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:32.287 [2024-11-20 12:39:26.693230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b04340 (9): Bad file descriptor 00:25:32.287 [2024-11-20 12:39:26.797565] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:32.287 11071.60 IOPS, 43.25 MiB/s [2024-11-20T11:39:38.053Z] 11135.50 IOPS, 43.50 MiB/s [2024-11-20T11:39:38.053Z] 11189.57 IOPS, 43.71 MiB/s [2024-11-20T11:39:38.053Z] 11238.00 IOPS, 43.90 MiB/s [2024-11-20T11:39:38.053Z] 11255.89 IOPS, 43.97 MiB/s [2024-11-20T11:39:38.053Z]
00:25:32.287 [2024-11-20 12:39:31.140080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:32.287 [2024-11-20 12:39:31.140119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:32.287 [2024-11-20 12:39:31.140130 - 12:39:31.140171] nvme_qpair.c: [... three further ASYNC EVENT REQUEST prints (qid:0, cid:1 through cid:3), each completed with ABORTED - SQ DELETION (00/08) - identical entries elided ...]
00:25:32.287 [2024-11-20 12:39:31.140178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b04340 is same with the state(6) to be set
00:25:32.287 [2024-11-20 12:39:31.142319 - 12:39:31.143646] nvme_qpair.c: [... repeated WRITE (sqid:1, lba:90304 through lba:90352, len:8) and READ (sqid:1, lba:89352 through lba:90016, len:8) command prints, each followed by an ABORTED - SQ DELETION (00/08) completion - identical entries elided ...]
00:25:32.290 [2024-11-20 12:39:31.143653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.290 
[2024-11-20 12:39:31.143820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.290 [2024-11-20 12:39:31.143834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.143991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.143998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:32.290 [2024-11-20 12:39:31.144072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.290 [2024-11-20 12:39:31.144170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.290 [2024-11-20 12:39:31.144177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.291 [2024-11-20 12:39:31.144184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5f2d0 is same with the state(6) to be set 00:25:32.291 [2024-11-20 12:39:31.144193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.291 [2024-11-20 12:39:31.144198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.291 [2024-11-20 12:39:31.144210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90296 len:8 PRP1 0x0 PRP2 0x0 00:25:32.291 [2024-11-20 12:39:31.144217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.291 [2024-11-20 12:39:31.144259] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:32.291 [2024-11-20 12:39:31.144270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:32.291 [2024-11-20 12:39:31.147036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:32.291 [2024-11-20 12:39:31.147066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b04340 (9): Bad file descriptor 00:25:32.291 [2024-11-20 12:39:31.173180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:32.291 11246.00 IOPS, 43.93 MiB/s [2024-11-20T11:39:38.057Z] 11276.18 IOPS, 44.05 MiB/s [2024-11-20T11:39:38.057Z] 11288.92 IOPS, 44.10 MiB/s [2024-11-20T11:39:38.057Z] 11305.85 IOPS, 44.16 MiB/s [2024-11-20T11:39:38.057Z] 11327.71 IOPS, 44.25 MiB/s 00:25:32.291 Latency(us) 00:25:32.291 [2024-11-20T11:39:38.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.291 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:32.291 Verification LBA range: start 0x0 length 0x4000 00:25:32.291 NVMe0n1 : 15.00 11329.75 44.26 521.34 0.00 10779.27 413.50 17850.76 00:25:32.291 [2024-11-20T11:39:38.057Z] =================================================================================================================== 00:25:32.291 [2024-11-20T11:39:38.057Z] Total : 11329.75 44.26 521.34 0.00 10779.27 413.50 17850.76 00:25:32.291 Received shutdown signal, test time was about 15.000000 seconds 00:25:32.291 00:25:32.291 Latency(us) 00:25:32.291 [2024-11-20T11:39:38.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.291 [2024-11-20T11:39:38.057Z] =================================================================================================================== 00:25:32.291 [2024-11-20T11:39:38.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=282666 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 282666 /var/tmp/bdevperf.sock 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 282666 ']' 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.291 [2024-11-20 12:39:37.659115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:32.291 [2024-11-20 12:39:37.847639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:32.291 12:39:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.549 NVMe0n1 00:25:32.549 12:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.807 00:25:32.807 12:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:33.065 00:25:33.324 12:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:33.324 12:39:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:33.324 12:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:33.592 12:39:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:36.873 12:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.873 12:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:36.874 12:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=283371 00:25:36.874 12:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.874 12:39:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 283371 00:25:37.809 { 00:25:37.809 "results": [ 00:25:37.809 { 00:25:37.809 "job": "NVMe0n1", 00:25:37.809 "core_mask": "0x1", 00:25:37.809 "workload": "verify", 00:25:37.809 "status": "finished", 00:25:37.809 "verify_range": { 00:25:37.809 "start": 0, 00:25:37.809 "length": 16384 00:25:37.809 }, 00:25:37.809 "queue_depth": 128, 00:25:37.809 "io_size": 4096, 00:25:37.809 "runtime": 1.011285, 00:25:37.809 "iops": 11805.771864509015, 00:25:37.809 "mibps": 46.11629634573834, 00:25:37.809 "io_failed": 0, 00:25:37.809 "io_timeout": 0, 00:25:37.809 "avg_latency_us": 
10797.529054598974, 00:25:37.809 "min_latency_us": 2356.175238095238, 00:25:37.809 "max_latency_us": 9986.438095238096 00:25:37.809 } 00:25:37.809 ], 00:25:37.809 "core_count": 1 00:25:37.809 } 00:25:37.809 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.809 [2024-11-20 12:39:37.286529] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:37.809 [2024-11-20 12:39:37.286583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282666 ] 00:25:37.809 [2024-11-20 12:39:37.359549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.809 [2024-11-20 12:39:37.396940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.809 [2024-11-20 12:39:39.201402] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:37.809 [2024-11-20 12:39:39.201447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-11-20 12:39:39.201458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-11-20 12:39:39.201467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-11-20 12:39:39.201474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-11-20 12:39:39.201481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-11-20 12:39:39.201487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-11-20 12:39:39.201494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.809 [2024-11-20 12:39:39.201500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.809 [2024-11-20 12:39:39.201507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:37.809 [2024-11-20 12:39:39.201531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:37.809 [2024-11-20 12:39:39.201546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ad340 (9): Bad file descriptor 00:25:37.809 [2024-11-20 12:39:39.206418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:37.809 Running I/O for 1 seconds... 
00:25:37.809 11811.00 IOPS, 46.14 MiB/s 00:25:37.809 Latency(us) 00:25:37.809 [2024-11-20T11:39:43.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.809 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.809 Verification LBA range: start 0x0 length 0x4000 00:25:37.809 NVMe0n1 : 1.01 11805.77 46.12 0.00 0.00 10797.53 2356.18 9986.44 00:25:37.809 [2024-11-20T11:39:43.575Z] =================================================================================================================== 00:25:37.809 [2024-11-20T11:39:43.575Z] Total : 11805.77 46.12 0.00 0.00 10797.53 2356.18 9986.44 00:25:37.809 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.809 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:38.067 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.325 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.325 12:39:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:38.584 12:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.842 12:39:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 282666 ']' 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282666' 00:25:42.125 killing process with pid 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 282666 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:42.125 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.382 12:39:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.382 rmmod nvme_tcp 00:25:42.382 rmmod nvme_fabrics 00:25:42.382 rmmod nvme_keyring 00:25:42.382 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.382 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:42.382 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:42.382 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 279649 ']' 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 279649 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 279649 ']' 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 279649 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279649 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279649' 00:25:42.383 killing process with pid 279649 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 279649 00:25:42.383 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 279649 00:25:42.641 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:42.641 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:42.641 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.642 12:39:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.178 00:25:45.178 real 0m37.349s 00:25:45.178 user 1m58.134s 00:25:45.178 sys 
0m7.928s 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:45.178 ************************************ 00:25:45.178 END TEST nvmf_failover 00:25:45.178 ************************************ 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.178 12:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.178 ************************************ 00:25:45.179 START TEST nvmf_host_discovery 00:25:45.179 ************************************ 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:45.179 * Looking for test storage... 
00:25:45.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:45.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.179 --rc genhtml_branch_coverage=1 00:25:45.179 --rc genhtml_function_coverage=1 00:25:45.179 --rc 
genhtml_legend=1 00:25:45.179 --rc geninfo_all_blocks=1 00:25:45.179 --rc geninfo_unexecuted_blocks=1 00:25:45.179 00:25:45.179 ' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:45.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.179 --rc genhtml_branch_coverage=1 00:25:45.179 --rc genhtml_function_coverage=1 00:25:45.179 --rc genhtml_legend=1 00:25:45.179 --rc geninfo_all_blocks=1 00:25:45.179 --rc geninfo_unexecuted_blocks=1 00:25:45.179 00:25:45.179 ' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:45.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.179 --rc genhtml_branch_coverage=1 00:25:45.179 --rc genhtml_function_coverage=1 00:25:45.179 --rc genhtml_legend=1 00:25:45.179 --rc geninfo_all_blocks=1 00:25:45.179 --rc geninfo_unexecuted_blocks=1 00:25:45.179 00:25:45.179 ' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:45.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.179 --rc genhtml_branch_coverage=1 00:25:45.179 --rc genhtml_function_coverage=1 00:25:45.179 --rc genhtml_legend=1 00:25:45.179 --rc geninfo_all_blocks=1 00:25:45.179 --rc geninfo_unexecuted_blocks=1 00:25:45.179 00:25:45.179 ' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.179 12:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.179 12:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.179 12:39:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.179 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.180 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:51.751 
12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.751 12:39:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:51.751 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:51.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:51.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:51.752 Found net devices under 0000:86:00.0: cvl_0_0 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:51.752 Found net devices under 0000:86:00.1: cvl_0_1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:51.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:25:51.752 00:25:51.752 --- 10.0.0.2 ping statistics --- 00:25:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.752 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:25:51.752 00:25:51.752 --- 10.0.0.1 ping statistics --- 00:25:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.752 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.752 
12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=287822 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 287822 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 287822 ']' 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.752 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.752 [2024-11-20 12:39:56.660681] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:51.752 [2024-11-20 12:39:56.660722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.752 [2024-11-20 12:39:56.718105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.752 [2024-11-20 12:39:56.756615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.752 [2024-11-20 12:39:56.756644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.752 [2024-11-20 12:39:56.756651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.752 [2024-11-20 12:39:56.756656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.752 [2024-11-20 12:39:56.756661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.752 [2024-11-20 12:39:56.757220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 [2024-11-20 12:39:56.902791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 [2024-11-20 12:39:56.914979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:51.753 12:39:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 null0 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 null1 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=287880 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 287880 /tmp/host.sock 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 287880 ']' 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:51.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.753 12:39:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 [2024-11-20 12:39:56.992054] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:25:51.753 [2024-11-20 12:39:56.992096] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287880 ] 00:25:51.753 [2024-11-20 12:39:57.065079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.753 [2024-11-20 12:39:57.107314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:51.753 12:39:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:51.753 12:39:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:51.753 
12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.753 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.754 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 [2024-11-20 12:39:57.520542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:52.013 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:52.581 [2024-11-20 12:39:58.260704] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:52.581 [2024-11-20 12:39:58.260721] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:52.581 [2024-11-20 12:39:58.260733] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.840 [2024-11-20 12:39:58.387123] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:52.840 [2024-11-20 12:39:58.563158] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:52.840 [2024-11-20 12:39:58.563969] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x20dfdd0:1 started. 
00:25:52.840 [2024-11-20 12:39:58.565327] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:52.840 [2024-11-20 12:39:58.565344] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:52.840 [2024-11-20 12:39:58.569816] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x20dfdd0 was disconnected and freed. delete nvme_qpair. 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.099 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.359 12:39:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.359 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.360 [2024-11-20 12:39:58.925643] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x20e01a0:1 started. 
00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 [2024-11-20 12:39:58.930514] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x20e01a0 was disconnected and freed. delete nvme_qpair. 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 [2024-11-20 12:39:59.024575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:53.360 [2024-11-20 12:39:59.025390] 
bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:53.360 [2024-11-20 12:39:59.025410] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:53.360 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 
0 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.621 [2024-11-20 12:39:59.151796] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ 
\4\4\2\1 ]] 00:25:53.621 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:53.880 [2024-11-20 12:39:59.457082] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:53.880 [2024-11-20 12:39:59.457114] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:53.880 [2024-11-20 12:39:59.457121] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:53.880 [2024-11-20 12:39:59.457125] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:54.448 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.708 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.709 [2024-11-20 12:40:00.276520] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:54.709 [2024-11-20 12:40:00.276543] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.709 [2024-11-20 12:40:00.277485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.709 [2024-11-20 12:40:00.277501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.709 [2024-11-20 12:40:00.277509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:54.709 [2024-11-20 12:40:00.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.709 [2024-11-20 12:40:00.277540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.709 [2024-11-20 12:40:00.277547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.709 [2024-11-20 12:40:00.277554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:54.709 [2024-11-20 12:40:00.277565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:54.709 [2024-11-20 12:40:00.277572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:54.709 12:40:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.709 [2024-11-20 12:40:00.287495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.709 [2024-11-20 12:40:00.297528] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.709 [2024-11-20 12:40:00.297541] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:54.709 [2024-11-20 12:40:00.297546] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.297550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.709 [2024-11-20 12:40:00.297568] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:54.709 [2024-11-20 12:40:00.297775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-11-20 12:40:00.297789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.709 [2024-11-20 12:40:00.297797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.709 [2024-11-20 12:40:00.297809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.709 [2024-11-20 12:40:00.297826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.709 [2024-11-20 12:40:00.297833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.709 [2024-11-20 12:40:00.297841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.709 [2024-11-20 12:40:00.297847] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.709 [2024-11-20 12:40:00.297852] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.709 [2024-11-20 12:40:00.297856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:54.709 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.709 [2024-11-20 12:40:00.307599] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.709 [2024-11-20 12:40:00.307611] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:54.709 [2024-11-20 12:40:00.307615] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.307619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.709 [2024-11-20 12:40:00.307632] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.307742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-11-20 12:40:00.307754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.709 [2024-11-20 12:40:00.307761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.709 [2024-11-20 12:40:00.307771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.709 [2024-11-20 12:40:00.307780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.709 [2024-11-20 12:40:00.307786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.709 [2024-11-20 12:40:00.307793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.709 [2024-11-20 12:40:00.307798] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.709 [2024-11-20 12:40:00.307802] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.709 [2024-11-20 12:40:00.307806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:54.709 [2024-11-20 12:40:00.317662] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.709 [2024-11-20 12:40:00.317680] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:54.709 [2024-11-20 12:40:00.317684] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.317688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.709 [2024-11-20 12:40:00.317702] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.317972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.709 [2024-11-20 12:40:00.317984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.709 [2024-11-20 12:40:00.317991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.709 [2024-11-20 12:40:00.318001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.709 [2024-11-20 12:40:00.318018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.709 [2024-11-20 12:40:00.318024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.709 [2024-11-20 12:40:00.318031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.709 [2024-11-20 12:40:00.318036] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:54.709 [2024-11-20 12:40:00.318041] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.709 [2024-11-20 12:40:00.318048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:54.709 [2024-11-20 12:40:00.327733] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.709 [2024-11-20 12:40:00.327747] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:54.709 [2024-11-20 12:40:00.327751] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.709 [2024-11-20 12:40:00.327755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.709 [2024-11-20 12:40:00.327768] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:54.710 [2024-11-20 12:40:00.327960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-11-20 12:40:00.327973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.710 [2024-11-20 12:40:00.327980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.710 [2024-11-20 12:40:00.327990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.710 [2024-11-20 12:40:00.328000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.710 [2024-11-20 12:40:00.328006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.710 [2024-11-20 12:40:00.328013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.710 [2024-11-20 12:40:00.328018] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.710 [2024-11-20 12:40:00.328022] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.710 [2024-11-20 12:40:00.328026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.710 [2024-11-20 12:40:00.337799] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.710 [2024-11-20 12:40:00.337811] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:54.710 [2024-11-20 12:40:00.337815] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.710 [2024-11-20 12:40:00.337822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.710 [2024-11-20 12:40:00.337834] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 [2024-11-20 12:40:00.338021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-11-20 12:40:00.338035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.710 [2024-11-20 12:40:00.338043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.710 [2024-11-20 12:40:00.338053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.710 [2024-11-20 12:40:00.338063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.710 [2024-11-20 12:40:00.338070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.710 [2024-11-20 12:40:00.338077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.710 [2024-11-20 12:40:00.338083] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.710 [2024-11-20 12:40:00.338088] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:54.710 [2024-11-20 12:40:00.338091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:54.710 [2024-11-20 12:40:00.347866] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.710 [2024-11-20 12:40:00.347880] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:54.710 [2024-11-20 12:40:00.347884] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.710 [2024-11-20 12:40:00.347888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.710 [2024-11-20 12:40:00.347901] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:54.710 [2024-11-20 12:40:00.348178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-11-20 12:40:00.348190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.710 [2024-11-20 12:40:00.348197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.710 [2024-11-20 12:40:00.348212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.710 [2024-11-20 12:40:00.348228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.710 [2024-11-20 12:40:00.348235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.710 [2024-11-20 12:40:00.348242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:54.710 [2024-11-20 12:40:00.348247] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.710 [2024-11-20 12:40:00.348252] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.710 [2024-11-20 12:40:00.348255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:54.710 [2024-11-20 12:40:00.357932] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:54.710 [2024-11-20 12:40:00.357946] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:54.710 [2024-11-20 12:40:00.357951] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:54.710 [2024-11-20 12:40:00.357955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:54.710 [2024-11-20 12:40:00.357967] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:54.710 [2024-11-20 12:40:00.358132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.710 [2024-11-20 12:40:00.358143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b0390 with addr=10.0.0.2, port=4420 00:25:54.710 [2024-11-20 12:40:00.358150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0390 is same with the state(6) to be set 00:25:54.710 [2024-11-20 12:40:00.358159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b0390 (9): Bad file descriptor 00:25:54.710 [2024-11-20 12:40:00.358169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:54.710 [2024-11-20 12:40:00.358175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:54.710 [2024-11-20 12:40:00.358181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:54.710 [2024-11-20 12:40:00.358187] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:54.710 [2024-11-20 12:40:00.358191] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:54.710 [2024-11-20 12:40:00.358195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:54.710 [2024-11-20 12:40:00.363278] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:54.710 [2024-11-20 12:40:00.363294] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.710 
12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:54.710 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.711 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:54.969 12:40:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.969 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.970 
12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:54.970 12:40:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.970 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.905 [2024-11-20 12:40:01.648946] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:55.905 [2024-11-20 12:40:01.648963] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:55.905 [2024-11-20 12:40:01.648973] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.163 [2024-11-20 12:40:01.735396] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:56.422 [2024-11-20 12:40:01.998657] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:56.422 [2024-11-20 12:40:01.999249] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x20e61b0:1 started. 00:25:56.422 [2024-11-20 12:40:02.000839] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.422 [2024-11-20 12:40:02.000862] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.422 [2024-11-20 12:40:02.008925] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x20e61b0 was disconnected and freed. delete nvme_qpair. 00:25:56.422 request: 00:25:56.422 { 00:25:56.422 "name": "nvme", 00:25:56.422 "trtype": "tcp", 00:25:56.422 "traddr": "10.0.0.2", 00:25:56.422 "adrfam": "ipv4", 00:25:56.422 "trsvcid": "8009", 00:25:56.422 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:56.422 "wait_for_attach": true, 00:25:56.422 "method": "bdev_nvme_start_discovery", 00:25:56.422 "req_id": 1 00:25:56.422 } 00:25:56.422 Got JSON-RPC error response 00:25:56.422 response: 00:25:56.422 { 00:25:56.422 "code": -17, 00:25:56.422 "message": "File exists" 00:25:56.422 } 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:56.422 
12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:56.422 12:40:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.422 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.423 request: 00:25:56.423 { 00:25:56.423 "name": "nvme_second", 00:25:56.423 "trtype": "tcp", 00:25:56.423 "traddr": "10.0.0.2", 00:25:56.423 "adrfam": "ipv4", 00:25:56.423 "trsvcid": "8009", 00:25:56.423 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:56.423 "wait_for_attach": true, 00:25:56.423 "method": "bdev_nvme_start_discovery", 00:25:56.423 "req_id": 1 00:25:56.423 } 00:25:56.423 Got JSON-RPC error response 00:25:56.423 response: 00:25:56.423 { 00:25:56.423 "code": -17, 00:25:56.423 "message": "File exists" 00:25:56.423 } 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.423 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.682 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.617 [2024-11-20 12:40:03.228332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.617 [2024-11-20 12:40:03.228358] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e1180 with addr=10.0.0.2, port=8010 00:25:57.617 [2024-11-20 12:40:03.228370] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:57.617 [2024-11-20 12:40:03.228376] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:57.617 [2024-11-20 12:40:03.228382] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:58.553 [2024-11-20 12:40:04.230701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.553 [2024-11-20 12:40:04.230725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c77f0 with addr=10.0.0.2, port=8010 00:25:58.553 [2024-11-20 12:40:04.230736] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:58.553 [2024-11-20 12:40:04.230742] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:58.553 [2024-11-20 12:40:04.230748] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:59.490 [2024-11-20 12:40:05.232929] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:59.490 request: 00:25:59.490 { 00:25:59.490 "name": "nvme_second", 00:25:59.490 "trtype": "tcp", 00:25:59.490 "traddr": "10.0.0.2", 00:25:59.490 "adrfam": "ipv4", 00:25:59.490 "trsvcid": "8010", 00:25:59.490 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.490 "wait_for_attach": false, 00:25:59.490 "attach_timeout_ms": 3000, 00:25:59.490 "method": "bdev_nvme_start_discovery", 00:25:59.490 "req_id": 1 00:25:59.490 } 00:25:59.490 Got JSON-RPC error response 00:25:59.490 response: 00:25:59.490 { 00:25:59.490 "code": -110, 00:25:59.490 "message": "Connection timed out" 00:25:59.490 } 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.490 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 287880 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:59.749 12:40:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.749 rmmod nvme_tcp 00:25:59.749 rmmod nvme_fabrics 00:25:59.749 rmmod nvme_keyring 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 287822 ']' 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 287822 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 287822 ']' 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 287822 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287822 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287822' 00:25:59.749 
killing process with pid 287822 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 287822 00:25:59.749 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 287822 00:26:00.009 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.010 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:01.915 00:26:01.915 real 0m17.193s 00:26:01.915 user 0m20.495s 00:26:01.915 sys 0m5.805s 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.915 ************************************ 00:26:01.915 END TEST nvmf_host_discovery 00:26:01.915 ************************************ 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.915 12:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.174 ************************************ 00:26:02.174 START TEST nvmf_host_multipath_status 00:26:02.174 ************************************ 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:02.174 * Looking for test storage... 
00:26:02.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:02.174 12:40:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.174 12:40:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.174 --rc genhtml_branch_coverage=1 00:26:02.174 --rc genhtml_function_coverage=1 00:26:02.174 --rc genhtml_legend=1 00:26:02.174 --rc geninfo_all_blocks=1 00:26:02.174 --rc geninfo_unexecuted_blocks=1 00:26:02.174 00:26:02.174 ' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.174 --rc genhtml_branch_coverage=1 00:26:02.174 --rc genhtml_function_coverage=1 00:26:02.174 --rc genhtml_legend=1 00:26:02.174 --rc geninfo_all_blocks=1 00:26:02.174 --rc geninfo_unexecuted_blocks=1 00:26:02.174 00:26:02.174 ' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.174 --rc genhtml_branch_coverage=1 00:26:02.174 --rc genhtml_function_coverage=1 00:26:02.174 --rc genhtml_legend=1 00:26:02.174 --rc geninfo_all_blocks=1 00:26:02.174 --rc geninfo_unexecuted_blocks=1 00:26:02.174 00:26:02.174 ' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.174 --rc genhtml_branch_coverage=1 00:26:02.174 --rc genhtml_function_coverage=1 00:26:02.174 --rc genhtml_legend=1 00:26:02.174 --rc geninfo_all_blocks=1 00:26:02.174 --rc geninfo_unexecuted_blocks=1 00:26:02.174 00:26:02.174 ' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:02.174 
12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.174 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.175 12:40:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.175 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:08.742 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:08.742 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.742 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:08.743 Found net devices under 0000:86:00.0: cvl_0_0 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.743 12:40:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:08.743 Found net devices under 0000:86:00.1: cvl_0_1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.743 12:40:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:26:08.743 00:26:08.743 --- 10.0.0.2 ping statistics --- 00:26:08.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.743 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:26:08.743 00:26:08.743 --- 10.0.0.1 ping statistics --- 00:26:08.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.743 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=292917 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
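The nvmf/common.sh trace above builds the test topology: the target-side interface cvl_0_0 is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, an iptables ACCEPT rule (via the `ipts` wrapper, which tags the rule with an SPDK_NVMF comment) opens TCP port 4420, and pings in both directions confirm reachability. Condensed from the log into one sketch — interface names and addresses exactly as in this run; requires root and the CI host's NICs, so not meant to run elsewhere:

```shell
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
```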
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 292917 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 292917 ']' 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.743 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:08.743 [2024-11-20 12:40:13.904906] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:08.743 [2024-11-20 12:40:13.904957] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.743 [2024-11-20 12:40:13.984174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:08.743 [2024-11-20 12:40:14.025345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.743 [2024-11-20 12:40:14.025389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:08.743 [2024-11-20 12:40:14.025396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.743 [2024-11-20 12:40:14.025403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.743 [2024-11-20 12:40:14.025408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.743 [2024-11-20 12:40:14.026632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.743 [2024-11-20 12:40:14.026633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.002 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.002 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:09.002 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.002 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.002 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:09.260 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.260 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=292917 00:26:09.260 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:09.260 [2024-11-20 12:40:14.932303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.260 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:09.519 Malloc0 00:26:09.519 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:09.777 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:10.037 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.037 [2024-11-20 12:40:15.745455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.037 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:10.296 [2024-11-20 12:40:15.950064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=293390 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 293390 /var/tmp/bdevperf.sock 00:26:10.296 12:40:15 
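With networking up, the trace starts nvmf_tgt inside the namespace (nvmf/common.sh@508) and configures it over rpc.py: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that namespace on two listeners, ports 4420 and 4421. Condensed from the log, with the long workspace paths shortened; this depends on the CI workspace layout and is an outline, not a standalone script:

```shell
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
```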
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 293390 ']' 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:10.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.296 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:10.555 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.555 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:10.555 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:10.814 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:11.073 Nvme0n1 00:26:11.073 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:11.641 Nvme0n1 00:26:11.641 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:11.641 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:13.552 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:13.552 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:13.811 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.811 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
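On the host side (host/multipath_status.sh@44-56), bdevperf starts in wait-for-RPC mode (`-z`) and the same subsystem is attached twice through bdev_nvme_attach_controller, once per listener port, with `-x multipath` so both connections become I/O paths of a single Nvme0n1 bdev; perform_tests then drives verify I/O while the ANA states are flipped. The calls as they appear in the log, paths shortened:

```shell
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &
```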
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.188 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.447 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.447 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.447 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.447 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.706 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.964 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.964 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.964 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.964 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.223 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.223 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:16.223 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.482 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
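Each check_status cycle above reduces to one jq query per flag: port_status fetches `bdev_nvme_get_io_paths` over the bdevperf RPC socket and selects a single boolean (`current`, `connected`, `accessible`) for one trsvcid, e.g. `jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'`. As a dependency-free illustration of that selection, here is a pure-shell approximation run against a hypothetical, abridged sample of the RPC output — the real script uses jq, and the JSON layout here is inferred from the filters in this log, assuming one path object per port:

```shell
# Hypothetical, abridged sample of bdev_nvme_get_io_paths output.
json='{"poll_groups":[{"io_paths":[{"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},{"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# port_status-style check without jq: isolate the fields of one port's path
# object, then look for '"<field>":<expected>' inside it.
port_status() {
  local port=$1 field=$2 expected=$3
  local obj="${json#*\"trsvcid\":\"$port\"}"   # drop everything before this port's object
  obj="${obj%%\},\{*}"                          # stop at the next path object, if any
  obj="${obj%%\}\]*}"                           # or at the end of the io_paths array
  case "$obj" in
    *"\"$field\":$expected"*) echo "port $port: $field=$expected OK" ;;
    *) echo "port $port: $field mismatch" >&2; return 1 ;;
  esac
}

port_status 4420 current true
port_status 4421 current false
```

The prefix/suffix trims stand in for jq's `select`; they only work because each trsvcid appears once, which matches the one-listener-per-port setup in this test.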
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.740 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:17.677 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:17.677 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.677 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.677 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.936 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.195 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.195 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.195 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.195 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.454 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.454 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.454 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.454 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.713 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.713 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.713 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.713 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.971 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.971 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:18.971 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:18.971 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:19.230 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:20.608 12:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:20.608 12:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.608 12:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.608 12:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.608 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.867 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.867 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.867 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.867 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.126 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.126 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.126 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.126 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.386 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.386 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.386 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.386 12:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.645 12:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.645 12:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:21.645 12:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.645 12:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:21.903 12:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:22.872 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:22.872 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.872 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.872 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.177 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.177 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.177 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.177 12:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.462 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.721 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.980 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.980 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:23.980 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.980 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.240 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.240 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:24.240 12:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:24.498 12:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:24.498 12:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.887 12:40:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.887 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.146 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.146 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.146 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.146 12:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.404 
12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.404 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:26.404 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.404 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:26.663 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:26.922 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.179 12:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:28.115 12:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:28.115 12:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:28.115 12:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.115 12:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.375 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.375 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:28.375 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.375 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.634 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.634 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.634 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.634 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.892 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.151 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.151 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.151 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.151 12:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.409 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.409 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:29.668 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:29.668 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:29.927 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.186 12:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:31.123 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:31.123 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:31.123 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:31.123 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.382 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.382 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.382 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.382 12:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.382 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.382 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:31.641 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.898 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.898 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.898 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.898 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.157 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.157 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.157 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.157 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.416 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.416 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:32.416 12:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.675 12:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.675 12:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.054 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.314 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.314 12:40:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.314 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.314 12:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.314 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.314 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.314 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.314 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.573 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.573 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.573 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.573 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.832 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.832 
12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.832 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.832 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.092 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.092 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:35.092 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.092 12:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:35.351 12:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.730 12:40:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:36.730 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.989 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.989 12:40:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.248 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.248 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.248 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.248 12:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.507 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.507 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.507 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.507 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.766 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.766 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:37.766 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.025 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:38.284 12:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:39.221 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:39.221 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.221 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.221 12:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.480 12:40:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.480 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.739 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.739 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.739 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.739 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:39.998 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.998 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:39.998 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.998 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.257 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.257 
12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:40.257 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.257 12:40:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.516 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.516 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 293390 00:26:40.516 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 293390 ']' 00:26:40.516 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 293390 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 293390 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 293390' 00:26:40.517 killing process with pid 293390 00:26:40.517 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 293390 00:26:40.517 12:40:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 293390
00:26:40.517 {
00:26:40.517 "results": [
00:26:40.517 {
00:26:40.517 "job": "Nvme0n1",
00:26:40.517 "core_mask": "0x4",
00:26:40.517 "workload": "verify",
00:26:40.517 "status": "terminated",
00:26:40.517 "verify_range": {
00:26:40.517 "start": 0,
00:26:40.517 "length": 16384
00:26:40.517 },
00:26:40.517 "queue_depth": 128,
00:26:40.517 "io_size": 4096,
00:26:40.517 "runtime": 28.855366,
00:26:40.517 "iops": 10770.301787196184,
00:26:40.517 "mibps": 42.07149135623509,
00:26:40.517 "io_failed": 0,
00:26:40.517 "io_timeout": 0,
00:26:40.517 "avg_latency_us": 11864.406647694495,
00:26:40.517 "min_latency_us": 140.43428571428572,
00:26:40.517 "max_latency_us": 3019898.88
00:26:40.517 }
00:26:40.517 ],
00:26:40.517 "core_count": 1
00:26:40.517 }
00:26:40.779 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 293390
00:26:40.779 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-20 12:40:16.008418] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
[2024-11-20 12:40:16.008467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293390 ]
[2024-11-20 12:40:16.083027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 12:40:16.124379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
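[Editor's note] The terminated bdevperf job summary printed just above is internally consistent: the reported "mibps" is simply iops × io_size / 2^20. A minimal sanity check, with the values copied from that JSON (this aside is not part of the test run itself):

```python
# Values copied from the bdevperf "results" JSON above.
iops = 10770.301787196184   # "iops"
io_size = 4096              # "io_size", bytes per I/O
runtime = 28.855366         # "runtime", seconds

# Throughput in MiB/s: I/Os per second times bytes per I/O, scaled to MiB.
mibps = iops * io_size / (1 << 20)
# Agrees with the reported "mibps": 42.07149135623509 to within float rounding.
assert abs(mibps - 42.07149135623509) < 1e-6

print(round(mibps, 2))  # -> 42.07
```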
00:26:40.779 11594.00 IOPS, 45.29 MiB/s [2024-11-20T11:40:46.545Z] 11551.00 IOPS, 45.12 MiB/s [2024-11-20T11:40:46.545Z] 11581.33 IOPS, 45.24 MiB/s [2024-11-20T11:40:46.545Z] 11563.00 IOPS, 45.17 MiB/s [2024-11-20T11:40:46.545Z] 11598.20 IOPS, 45.31 MiB/s [2024-11-20T11:40:46.545Z] 11579.83 IOPS, 45.23 MiB/s [2024-11-20T11:40:46.545Z] 11590.29 IOPS, 45.27 MiB/s [2024-11-20T11:40:46.546Z] 11579.12 IOPS, 45.23 MiB/s [2024-11-20T11:40:46.546Z] 11582.67 IOPS, 45.24 MiB/s [2024-11-20T11:40:46.546Z] 11580.00 IOPS, 45.23 MiB/s [2024-11-20T11:40:46.546Z] 11578.00 IOPS, 45.23 MiB/s [2024-11-20T11:40:46.546Z] 11588.58 IOPS, 45.27 MiB/s [2024-11-20T11:40:46.546Z]
00:26:40.780 [2024-11-20 12:40:30.024393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.780 [2024-11-20 12:40:30.024430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:40.780 [... identical READ command/completion pairs repeat for lba:2536 through lba:3144 (len:8, qid:1), with one interleaved WRITE (lba:3152, cid:49); every I/O completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:40.782 [2024-11-20 12:40:30.026391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.782 [2024-11-20 12:40:30.026399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:40.782 [... identical WRITE command/completion pairs repeat for lba:3168 through lba:3432 (len:8, qid:1), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:40.782 [2024-11-20 12:40:30.027307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:30.027780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:30.027792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:40.783 11367.08 IOPS, 44.40 MiB/s [2024-11-20T11:40:46.549Z] 10555.14 IOPS, 41.23 MiB/s [2024-11-20T11:40:46.549Z] 9851.47 IOPS, 38.48 MiB/s [2024-11-20T11:40:46.549Z] 9402.62 IOPS, 36.73 MiB/s [2024-11-20T11:40:46.549Z] 9524.59 IOPS, 37.21 MiB/s [2024-11-20T11:40:46.549Z] 9635.67 IOPS, 37.64 MiB/s [2024-11-20T11:40:46.549Z] 9835.00 IOPS, 38.42 MiB/s [2024-11-20T11:40:46.549Z] 10031.60 IOPS, 39.19 MiB/s [2024-11-20T11:40:46.549Z] 10219.19 IOPS, 39.92 MiB/s [2024-11-20T11:40:46.549Z] 10276.14 IOPS, 40.14 MiB/s [2024-11-20T11:40:46.549Z] 10323.09 IOPS, 40.32 MiB/s [2024-11-20T11:40:46.549Z] 10390.67 IOPS, 40.59 MiB/s [2024-11-20T11:40:46.549Z] 10523.56 IOPS, 41.11 MiB/s [2024-11-20T11:40:46.549Z] 10653.38 IOPS, 41.61 MiB/s [2024-11-20T11:40:46.549Z] [2024-11-20 12:40:43.771741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.771982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.771994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.783 [2024-11-20 12:40:43.772217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:40.783 [2024-11-20 12:40:43.772229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.772236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.772248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.772255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.772274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.773564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.773984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.774002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.774022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.774040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.774060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.774080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.784 [2024-11-20 12:40:43.774099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:40.784 [2024-11-20 12:40:43.774656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.784 [2024-11-20 12:40:43.774672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:40.785 [2024-11-20 12:40:43.774687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.785 [2024-11-20 12:40:43.774694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:40.785 [2024-11-20 12:40:43.774707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.785 [2024-11-20 12:40:43.774714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:40.785 [2024-11-20 12:40:43.774726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.785 [2024-11-20 12:40:43.774732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.785 [2024-11-20 12:40:43.774745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.785 [2024-11-20 12:40:43.774751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.785 10728.67 IOPS, 41.91 MiB/s [2024-11-20T11:40:46.551Z] 10753.50 IOPS, 42.01 MiB/s [2024-11-20T11:40:46.551Z] Received shutdown signal, test time was about 28.856033 seconds 00:26:40.785 00:26:40.785 Latency(us) 00:26:40.785 [2024-11-20T11:40:46.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.785 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:40.785 Verification LBA range: start 0x0 length 0x4000 00:26:40.785 Nvme0n1 : 28.86 10770.30 42.07 0.00 0.00 11864.41 140.43 3019898.88 00:26:40.785 [2024-11-20T11:40:46.551Z] =================================================================================================================== 00:26:40.785 [2024-11-20T11:40:46.551Z] Total : 10770.30 42.07 0.00 
0.00 11864.41 140.43 3019898.88 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.785 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.785 rmmod nvme_tcp 00:26:40.785 rmmod nvme_fabrics 00:26:40.785 rmmod nvme_keyring 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 292917 ']' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 292917 00:26:41.044 
12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 292917 ']' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 292917 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292917 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292917' 00:26:41.044 killing process with pid 292917 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 292917 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 292917 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:41.044 12:40:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.044 12:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:43.582 00:26:43.582 real 0m41.155s 00:26:43.582 user 1m51.051s 00:26:43.582 sys 0m11.752s 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:43.582 ************************************ 00:26:43.582 END TEST nvmf_host_multipath_status 00:26:43.582 ************************************ 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:43.582 12:40:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.582 ************************************ 00:26:43.582 START TEST nvmf_discovery_remove_ifc 00:26:43.582 ************************************ 00:26:43.582 
12:40:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:43.582 * Looking for test storage... 00:26:43.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:43.582 12:40:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.582 --rc genhtml_branch_coverage=1 00:26:43.582 --rc genhtml_function_coverage=1 00:26:43.582 --rc genhtml_legend=1 00:26:43.582 --rc geninfo_all_blocks=1 00:26:43.582 --rc geninfo_unexecuted_blocks=1 00:26:43.582 00:26:43.582 ' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.582 --rc genhtml_branch_coverage=1 00:26:43.582 --rc genhtml_function_coverage=1 00:26:43.582 --rc genhtml_legend=1 00:26:43.582 --rc geninfo_all_blocks=1 00:26:43.582 --rc geninfo_unexecuted_blocks=1 00:26:43.582 00:26:43.582 ' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.582 --rc genhtml_branch_coverage=1 00:26:43.582 --rc genhtml_function_coverage=1 00:26:43.582 --rc genhtml_legend=1 00:26:43.582 --rc geninfo_all_blocks=1 00:26:43.582 --rc geninfo_unexecuted_blocks=1 00:26:43.582 00:26:43.582 ' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.582 --rc genhtml_branch_coverage=1 00:26:43.582 --rc genhtml_function_coverage=1 00:26:43.582 --rc genhtml_legend=1 00:26:43.582 --rc geninfo_all_blocks=1 00:26:43.582 --rc geninfo_unexecuted_blocks=1 00:26:43.582 00:26:43.582 ' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.582 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:43.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:43.583 
12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:43.583 12:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.157 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:50.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:50.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:50.158 Found net devices under 0000:86:00.0: cvl_0_0 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.158 12:40:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:50.158 Found net devices under 0000:86:00.1: cvl_0_1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.158 12:40:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.158 12:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.158 12:40:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:26:50.158 00:26:50.158 --- 10.0.0.2 ping statistics --- 00:26:50.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.158 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:26:50.158 00:26:50.158 --- 10.0.0.1 ping statistics --- 00:26:50.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.158 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=301935 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 301935 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 301935 ']' 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.158 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.158 [2024-11-20 12:40:55.159361] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:26:50.159 [2024-11-20 12:40:55.159409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.159 [2024-11-20 12:40:55.239436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.159 [2024-11-20 12:40:55.281530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.159 [2024-11-20 12:40:55.281566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:50.159 [2024-11-20 12:40:55.281573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.159 [2024-11-20 12:40:55.281578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.159 [2024-11-20 12:40:55.281583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.159 [2024-11-20 12:40:55.282142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.418 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.418 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:50.418 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:50.418 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.418 12:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.418 [2024-11-20 12:40:56.046296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.418 [2024-11-20 12:40:56.054488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:50.418 null0 00:26:50.418 [2024-11-20 12:40:56.086456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=302182 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 302182 /tmp/host.sock 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 302182 ']' 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:50.418 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.418 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.418 [2024-11-20 12:40:56.155912] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:26:50.418 [2024-11-20 12:40:56.155955] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302182 ] 00:26:50.677 [2024-11-20 12:40:56.232692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.678 [2024-11-20 12:40:56.274790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.678 12:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.678 12:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.055 [2024-11-20 12:40:57.453376] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.055 [2024-11-20 12:40:57.453395] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.055 [2024-11-20 12:40:57.453413] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.055 [2024-11-20 12:40:57.541679] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:52.055 [2024-11-20 12:40:57.765783] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:52.055 [2024-11-20 12:40:57.766540] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7389f0:1 started. 
00:26:52.055 [2024-11-20 12:40:57.767865] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.055 [2024-11-20 12:40:57.767901] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.055 [2024-11-20 12:40:57.767919] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.055 [2024-11-20 12:40:57.767931] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.055 [2024-11-20 12:40:57.767948] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:52.055 [2024-11-20 12:40:57.772072] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7389f0 was disconnected and freed. delete nvme_qpair. 
00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.055 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.314 12:40:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.314 12:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.249 12:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.507 12:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.507 12:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.441 12:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.375 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.375 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.375 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.375 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.375 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.376 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 12:41:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.376 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.376 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.376 12:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:56.752 12:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.689 [2024-11-20 12:41:03.209465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:57.689 [2024-11-20 12:41:03.209502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.689 [2024-11-20 12:41:03.209531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.689 [2024-11-20 12:41:03.209541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.689 [2024-11-20 12:41:03.209548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.689 [2024-11-20 12:41:03.209556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.689 [2024-11-20 12:41:03.209567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.689 [2024-11-20 12:41:03.209574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.689 [2024-11-20 12:41:03.209581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.689 [2024-11-20 12:41:03.209588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.689 [2024-11-20 12:41:03.209595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.689 [2024-11-20 12:41:03.209602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x715220 is same with the state(6) to be set 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.689 12:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.689 [2024-11-20 12:41:03.219487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715220 (9): Bad file descriptor 00:26:57.689 [2024-11-20 12:41:03.229520] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.689 [2024-11-20 12:41:03.229531] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.689 [2024-11-20 12:41:03.229535] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.689 [2024-11-20 12:41:03.229540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.689 [2024-11-20 12:41:03.229560] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.625 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.625 [2024-11-20 12:41:04.263296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:58.625 [2024-11-20 12:41:04.263370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x715220 with addr=10.0.0.2, port=4420 00:26:58.625 [2024-11-20 12:41:04.263402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x715220 is same with the state(6) to be set 00:26:58.625 [2024-11-20 12:41:04.263455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715220 (9): Bad file descriptor 00:26:58.625 [2024-11-20 12:41:04.264411] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:58.625 [2024-11-20 12:41:04.264475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:58.625 [2024-11-20 12:41:04.264499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:58.625 [2024-11-20 12:41:04.264523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:58.625 [2024-11-20 12:41:04.264543] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:58.626 [2024-11-20 12:41:04.264559] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:58.626 [2024-11-20 12:41:04.264581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:58.626 [2024-11-20 12:41:04.264603] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:58.626 [2024-11-20 12:41:04.264618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:58.626 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.626 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.626 12:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.562 [2024-11-20 12:41:05.267131] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:59.562 [2024-11-20 12:41:05.267152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:59.563 [2024-11-20 12:41:05.267163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:59.563 [2024-11-20 12:41:05.267170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:59.563 [2024-11-20 12:41:05.267176] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:59.563 [2024-11-20 12:41:05.267182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:59.563 [2024-11-20 12:41:05.267186] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:59.563 [2024-11-20 12:41:05.267191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:59.563 [2024-11-20 12:41:05.267214] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:59.563 [2024-11-20 12:41:05.267234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.563 [2024-11-20 12:41:05.267243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.563 [2024-11-20 12:41:05.267252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.563 [2024-11-20 12:41:05.267259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.563 [2024-11-20 12:41:05.267265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:59.563 [2024-11-20 12:41:05.267272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.563 [2024-11-20 12:41:05.267278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.563 [2024-11-20 12:41:05.267285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.563 [2024-11-20 12:41:05.267293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.563 [2024-11-20 12:41:05.267299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.563 [2024-11-20 12:41:05.267305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:59.563 [2024-11-20 12:41:05.267693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x704900 (9): Bad file descriptor 00:26:59.563 [2024-11-20 12:41:05.268704] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:59.563 [2024-11-20 12:41:05.268719] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.563 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:59.822 12:41:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:00.759 12:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.695 [2024-11-20 12:41:07.318353] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.695 [2024-11-20 12:41:07.318369] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.695 [2024-11-20 12:41:07.318387] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.695 [2024-11-20 12:41:07.406645] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:01.955 [2024-11-20 12:41:07.508339] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:01.955 [2024-11-20 12:41:07.508946] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x709760:1 started. 00:27:01.955 [2024-11-20 12:41:07.509948] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:01.955 [2024-11-20 12:41:07.509977] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:01.955 [2024-11-20 12:41:07.509993] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:01.955 [2024-11-20 12:41:07.510005] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:01.955 [2024-11-20 12:41:07.510011] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.955 [2024-11-20 12:41:07.517145] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x709760 was disconnected and freed. delete nvme_qpair. 
00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 302182 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 302182 ']' 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 302182 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302182 00:27:01.955 
12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302182' 00:27:01.955 killing process with pid 302182 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 302182 00:27:01.955 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 302182 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.215 rmmod nvme_tcp 00:27:02.215 rmmod nvme_fabrics 00:27:02.215 rmmod nvme_keyring 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 301935 ']' 00:27:02.215 12:41:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 301935 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 301935 ']' 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 301935 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301935 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301935' 00:27:02.215 killing process with pid 301935 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 301935 00:27:02.215 12:41:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 301935 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.474 12:41:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.378 12:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:04.378 00:27:04.378 real 0m21.177s 00:27:04.378 user 0m25.592s 00:27:04.378 sys 0m5.881s 00:27:04.378 12:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.378 12:41:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.378 ************************************ 00:27:04.378 END TEST nvmf_discovery_remove_ifc 00:27:04.378 ************************************ 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.638 ************************************ 00:27:04.638 START TEST nvmf_identify_kernel_target 
00:27:04.638 ************************************ 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:04.638 * Looking for test storage... 00:27:04.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:04.638 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.639 
12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.639 --rc genhtml_branch_coverage=1 00:27:04.639 --rc genhtml_function_coverage=1 00:27:04.639 --rc genhtml_legend=1 00:27:04.639 --rc geninfo_all_blocks=1 00:27:04.639 --rc geninfo_unexecuted_blocks=1 00:27:04.639 00:27:04.639 ' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.639 --rc genhtml_branch_coverage=1 00:27:04.639 --rc genhtml_function_coverage=1 00:27:04.639 --rc genhtml_legend=1 00:27:04.639 --rc geninfo_all_blocks=1 00:27:04.639 --rc geninfo_unexecuted_blocks=1 00:27:04.639 00:27:04.639 ' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.639 --rc genhtml_branch_coverage=1 00:27:04.639 --rc genhtml_function_coverage=1 00:27:04.639 --rc genhtml_legend=1 00:27:04.639 --rc geninfo_all_blocks=1 00:27:04.639 --rc geninfo_unexecuted_blocks=1 00:27:04.639 00:27:04.639 ' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:04.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.639 --rc genhtml_branch_coverage=1 00:27:04.639 --rc genhtml_function_coverage=1 00:27:04.639 --rc genhtml_legend=1 00:27:04.639 --rc geninfo_all_blocks=1 00:27:04.639 --rc geninfo_unexecuted_blocks=1 00:27:04.639 
00:27:04.639 ' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.639 12:41:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.639 12:41:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.207 12:41:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:11.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:11.207 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.208 12:41:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:11.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.208 12:41:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:11.208 Found net devices under 0000:86:00.0: cvl_0_0 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:11.208 Found net devices under 0000:86:00.1: cvl_0_1 
00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:11.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:11.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:27:11.208 00:27:11.208 --- 10.0.0.2 ping statistics --- 00:27:11.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.208 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:27:11.208 00:27:11.208 --- 10.0.0.1 ping statistics --- 00:27:11.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.208 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:11.208 
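The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-@291) moves the target NIC into its own network namespace so initiator and target can talk over real interfaces on a single host, then opens TCP port 4420 and pings across the boundary. A dry-run sketch of those steps, with `run()` echoing instead of executing since the real commands need root and this machine's cvl_* devices:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP test topology traced above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

run() { echo "$*"; }   # swap the body for "$@" to actually execute

run ip -4 addr flush "$NVMF_TARGET_INTERFACE"
run ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
run ip netns add "$NVMF_TARGET_NAMESPACE"
# target NIC disappears from the default namespace here
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_INITIATOR_IP/24" dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface
run iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
# sanity check, matching the ping output in the log
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
```

Any SPDK app that must bind the target-side address is then launched through `ip netns exec cvl_0_0_ns_spdk`, which is what the `NVMF_TARGET_NS_CMD` prefix in the trace provides.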
12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:11.208 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:11.209 12:41:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:13.863 Waiting for block devices as requested 00:27:13.863 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:13.863 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:13.863 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:13.863 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:13.863 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:13.863 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:14.122 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:14.122 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:14.122 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:14.122 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:14.382 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:14.382 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:14.382 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:14.642 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:14.642 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:27:14.642 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:14.901 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:14.901 No valid GPT data, bailing 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:14.901 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:15.161 00:27:15.161 Discovery Log Number of Records 2, Generation counter 2 00:27:15.161 =====Discovery Log Entry 0====== 00:27:15.161 trtype: tcp 00:27:15.161 adrfam: ipv4 00:27:15.161 subtype: current discovery subsystem 
00:27:15.161 treq: not specified, sq flow control disable supported 00:27:15.161 portid: 1 00:27:15.161 trsvcid: 4420 00:27:15.161 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:15.161 traddr: 10.0.0.1 00:27:15.161 eflags: none 00:27:15.161 sectype: none 00:27:15.161 =====Discovery Log Entry 1====== 00:27:15.161 trtype: tcp 00:27:15.161 adrfam: ipv4 00:27:15.161 subtype: nvme subsystem 00:27:15.161 treq: not specified, sq flow control disable supported 00:27:15.161 portid: 1 00:27:15.161 trsvcid: 4420 00:27:15.161 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:15.161 traddr: 10.0.0.1 00:27:15.161 eflags: none 00:27:15.161 sectype: none 00:27:15.161 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:15.161 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:15.161 ===================================================== 00:27:15.161 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:15.161 ===================================================== 00:27:15.161 Controller Capabilities/Features 00:27:15.161 ================================ 00:27:15.161 Vendor ID: 0000 00:27:15.161 Subsystem Vendor ID: 0000 00:27:15.161 Serial Number: 34123a208bffeffd38a3 00:27:15.161 Model Number: Linux 00:27:15.161 Firmware Version: 6.8.9-20 00:27:15.161 Recommended Arb Burst: 0 00:27:15.161 IEEE OUI Identifier: 00 00 00 00:27:15.161 Multi-path I/O 00:27:15.161 May have multiple subsystem ports: No 00:27:15.161 May have multiple controllers: No 00:27:15.161 Associated with SR-IOV VF: No 00:27:15.161 Max Data Transfer Size: Unlimited 00:27:15.161 Max Number of Namespaces: 0 00:27:15.161 Max Number of I/O Queues: 1024 00:27:15.161 NVMe Specification Version (VS): 1.3 00:27:15.161 NVMe Specification Version (Identify): 1.3 00:27:15.161 Maximum Queue Entries: 1024 
00:27:15.161 Contiguous Queues Required: No 00:27:15.161 Arbitration Mechanisms Supported 00:27:15.161 Weighted Round Robin: Not Supported 00:27:15.161 Vendor Specific: Not Supported 00:27:15.161 Reset Timeout: 7500 ms 00:27:15.161 Doorbell Stride: 4 bytes 00:27:15.161 NVM Subsystem Reset: Not Supported 00:27:15.161 Command Sets Supported 00:27:15.161 NVM Command Set: Supported 00:27:15.161 Boot Partition: Not Supported 00:27:15.161 Memory Page Size Minimum: 4096 bytes 00:27:15.161 Memory Page Size Maximum: 4096 bytes 00:27:15.161 Persistent Memory Region: Not Supported 00:27:15.161 Optional Asynchronous Events Supported 00:27:15.161 Namespace Attribute Notices: Not Supported 00:27:15.161 Firmware Activation Notices: Not Supported 00:27:15.161 ANA Change Notices: Not Supported 00:27:15.161 PLE Aggregate Log Change Notices: Not Supported 00:27:15.161 LBA Status Info Alert Notices: Not Supported 00:27:15.161 EGE Aggregate Log Change Notices: Not Supported 00:27:15.161 Normal NVM Subsystem Shutdown event: Not Supported 00:27:15.161 Zone Descriptor Change Notices: Not Supported 00:27:15.161 Discovery Log Change Notices: Supported 00:27:15.162 Controller Attributes 00:27:15.162 128-bit Host Identifier: Not Supported 00:27:15.162 Non-Operational Permissive Mode: Not Supported 00:27:15.162 NVM Sets: Not Supported 00:27:15.162 Read Recovery Levels: Not Supported 00:27:15.162 Endurance Groups: Not Supported 00:27:15.162 Predictable Latency Mode: Not Supported 00:27:15.162 Traffic Based Keep ALive: Not Supported 00:27:15.162 Namespace Granularity: Not Supported 00:27:15.162 SQ Associations: Not Supported 00:27:15.162 UUID List: Not Supported 00:27:15.162 Multi-Domain Subsystem: Not Supported 00:27:15.162 Fixed Capacity Management: Not Supported 00:27:15.162 Variable Capacity Management: Not Supported 00:27:15.162 Delete Endurance Group: Not Supported 00:27:15.162 Delete NVM Set: Not Supported 00:27:15.162 Extended LBA Formats Supported: Not Supported 00:27:15.162 Flexible 
Data Placement Supported: Not Supported 00:27:15.162 00:27:15.162 Controller Memory Buffer Support 00:27:15.162 ================================ 00:27:15.162 Supported: No 00:27:15.162 00:27:15.162 Persistent Memory Region Support 00:27:15.162 ================================ 00:27:15.162 Supported: No 00:27:15.162 00:27:15.162 Admin Command Set Attributes 00:27:15.162 ============================ 00:27:15.162 Security Send/Receive: Not Supported 00:27:15.162 Format NVM: Not Supported 00:27:15.162 Firmware Activate/Download: Not Supported 00:27:15.162 Namespace Management: Not Supported 00:27:15.162 Device Self-Test: Not Supported 00:27:15.162 Directives: Not Supported 00:27:15.162 NVMe-MI: Not Supported 00:27:15.162 Virtualization Management: Not Supported 00:27:15.162 Doorbell Buffer Config: Not Supported 00:27:15.162 Get LBA Status Capability: Not Supported 00:27:15.162 Command & Feature Lockdown Capability: Not Supported 00:27:15.162 Abort Command Limit: 1 00:27:15.162 Async Event Request Limit: 1 00:27:15.162 Number of Firmware Slots: N/A 00:27:15.162 Firmware Slot 1 Read-Only: N/A 00:27:15.162 Firmware Activation Without Reset: N/A 00:27:15.162 Multiple Update Detection Support: N/A 00:27:15.162 Firmware Update Granularity: No Information Provided 00:27:15.162 Per-Namespace SMART Log: No 00:27:15.162 Asymmetric Namespace Access Log Page: Not Supported 00:27:15.162 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:15.162 Command Effects Log Page: Not Supported 00:27:15.162 Get Log Page Extended Data: Supported 00:27:15.162 Telemetry Log Pages: Not Supported 00:27:15.162 Persistent Event Log Pages: Not Supported 00:27:15.162 Supported Log Pages Log Page: May Support 00:27:15.162 Commands Supported & Effects Log Page: Not Supported 00:27:15.162 Feature Identifiers & Effects Log Page:May Support 00:27:15.162 NVMe-MI Commands & Effects Log Page: May Support 00:27:15.162 Data Area 4 for Telemetry Log: Not Supported 00:27:15.162 Error Log Page Entries 
Supported: 1 00:27:15.162 Keep Alive: Not Supported 00:27:15.162 00:27:15.162 NVM Command Set Attributes 00:27:15.162 ========================== 00:27:15.162 Submission Queue Entry Size 00:27:15.162 Max: 1 00:27:15.162 Min: 1 00:27:15.162 Completion Queue Entry Size 00:27:15.162 Max: 1 00:27:15.162 Min: 1 00:27:15.162 Number of Namespaces: 0 00:27:15.162 Compare Command: Not Supported 00:27:15.162 Write Uncorrectable Command: Not Supported 00:27:15.162 Dataset Management Command: Not Supported 00:27:15.162 Write Zeroes Command: Not Supported 00:27:15.162 Set Features Save Field: Not Supported 00:27:15.162 Reservations: Not Supported 00:27:15.162 Timestamp: Not Supported 00:27:15.162 Copy: Not Supported 00:27:15.162 Volatile Write Cache: Not Present 00:27:15.162 Atomic Write Unit (Normal): 1 00:27:15.162 Atomic Write Unit (PFail): 1 00:27:15.162 Atomic Compare & Write Unit: 1 00:27:15.162 Fused Compare & Write: Not Supported 00:27:15.162 Scatter-Gather List 00:27:15.162 SGL Command Set: Supported 00:27:15.162 SGL Keyed: Not Supported 00:27:15.162 SGL Bit Bucket Descriptor: Not Supported 00:27:15.162 SGL Metadata Pointer: Not Supported 00:27:15.162 Oversized SGL: Not Supported 00:27:15.162 SGL Metadata Address: Not Supported 00:27:15.162 SGL Offset: Supported 00:27:15.162 Transport SGL Data Block: Not Supported 00:27:15.162 Replay Protected Memory Block: Not Supported 00:27:15.162 00:27:15.162 Firmware Slot Information 00:27:15.162 ========================= 00:27:15.162 Active slot: 0 00:27:15.162 00:27:15.162 00:27:15.162 Error Log 00:27:15.162 ========= 00:27:15.162 00:27:15.162 Active Namespaces 00:27:15.162 ================= 00:27:15.162 Discovery Log Page 00:27:15.162 ================== 00:27:15.162 Generation Counter: 2 00:27:15.162 Number of Records: 2 00:27:15.162 Record Format: 0 00:27:15.162 00:27:15.162 Discovery Log Entry 0 00:27:15.162 ---------------------- 00:27:15.162 Transport Type: 3 (TCP) 00:27:15.162 Address Family: 1 (IPv4) 00:27:15.162 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:15.162 Entry Flags: 00:27:15.162 Duplicate Returned Information: 0 00:27:15.162 Explicit Persistent Connection Support for Discovery: 0 00:27:15.162 Transport Requirements: 00:27:15.162 Secure Channel: Not Specified 00:27:15.162 Port ID: 1 (0x0001) 00:27:15.162 Controller ID: 65535 (0xffff) 00:27:15.162 Admin Max SQ Size: 32 00:27:15.162 Transport Service Identifier: 4420 00:27:15.162 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:15.162 Transport Address: 10.0.0.1 00:27:15.162 Discovery Log Entry 1 00:27:15.162 ---------------------- 00:27:15.162 Transport Type: 3 (TCP) 00:27:15.162 Address Family: 1 (IPv4) 00:27:15.162 Subsystem Type: 2 (NVM Subsystem) 00:27:15.162 Entry Flags: 00:27:15.162 Duplicate Returned Information: 0 00:27:15.162 Explicit Persistent Connection Support for Discovery: 0 00:27:15.162 Transport Requirements: 00:27:15.162 Secure Channel: Not Specified 00:27:15.162 Port ID: 1 (0x0001) 00:27:15.162 Controller ID: 65535 (0xffff) 00:27:15.162 Admin Max SQ Size: 32 00:27:15.162 Transport Service Identifier: 4420 00:27:15.162 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:15.162 Transport Address: 10.0.0.1 00:27:15.162 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:15.162 get_feature(0x01) failed 00:27:15.162 get_feature(0x02) failed 00:27:15.162 get_feature(0x04) failed 00:27:15.162 ===================================================== 00:27:15.162 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:15.162 ===================================================== 00:27:15.162 Controller Capabilities/Features 00:27:15.162 ================================ 00:27:15.162 Vendor ID: 0000 00:27:15.162 Subsystem Vendor ID: 
0000 00:27:15.162 Serial Number: 1db2cb2d2790dd5102ea 00:27:15.162 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:15.162 Firmware Version: 6.8.9-20 00:27:15.162 Recommended Arb Burst: 6 00:27:15.162 IEEE OUI Identifier: 00 00 00 00:27:15.162 Multi-path I/O 00:27:15.162 May have multiple subsystem ports: Yes 00:27:15.162 May have multiple controllers: Yes 00:27:15.162 Associated with SR-IOV VF: No 00:27:15.162 Max Data Transfer Size: Unlimited 00:27:15.162 Max Number of Namespaces: 1024 00:27:15.162 Max Number of I/O Queues: 128 00:27:15.162 NVMe Specification Version (VS): 1.3 00:27:15.162 NVMe Specification Version (Identify): 1.3 00:27:15.162 Maximum Queue Entries: 1024 00:27:15.162 Contiguous Queues Required: No 00:27:15.162 Arbitration Mechanisms Supported 00:27:15.162 Weighted Round Robin: Not Supported 00:27:15.162 Vendor Specific: Not Supported 00:27:15.162 Reset Timeout: 7500 ms 00:27:15.162 Doorbell Stride: 4 bytes 00:27:15.162 NVM Subsystem Reset: Not Supported 00:27:15.162 Command Sets Supported 00:27:15.162 NVM Command Set: Supported 00:27:15.162 Boot Partition: Not Supported 00:27:15.162 Memory Page Size Minimum: 4096 bytes 00:27:15.162 Memory Page Size Maximum: 4096 bytes 00:27:15.162 Persistent Memory Region: Not Supported 00:27:15.162 Optional Asynchronous Events Supported 00:27:15.162 Namespace Attribute Notices: Supported 00:27:15.162 Firmware Activation Notices: Not Supported 00:27:15.162 ANA Change Notices: Supported 00:27:15.162 PLE Aggregate Log Change Notices: Not Supported 00:27:15.162 LBA Status Info Alert Notices: Not Supported 00:27:15.162 EGE Aggregate Log Change Notices: Not Supported 00:27:15.162 Normal NVM Subsystem Shutdown event: Not Supported 00:27:15.162 Zone Descriptor Change Notices: Not Supported 00:27:15.163 Discovery Log Change Notices: Not Supported 00:27:15.163 Controller Attributes 00:27:15.163 128-bit Host Identifier: Supported 00:27:15.163 Non-Operational Permissive Mode: Not Supported 00:27:15.163 NVM Sets: Not 
Supported 00:27:15.163 Read Recovery Levels: Not Supported 00:27:15.163 Endurance Groups: Not Supported 00:27:15.163 Predictable Latency Mode: Not Supported 00:27:15.163 Traffic Based Keep ALive: Supported 00:27:15.163 Namespace Granularity: Not Supported 00:27:15.163 SQ Associations: Not Supported 00:27:15.163 UUID List: Not Supported 00:27:15.163 Multi-Domain Subsystem: Not Supported 00:27:15.163 Fixed Capacity Management: Not Supported 00:27:15.163 Variable Capacity Management: Not Supported 00:27:15.163 Delete Endurance Group: Not Supported 00:27:15.163 Delete NVM Set: Not Supported 00:27:15.163 Extended LBA Formats Supported: Not Supported 00:27:15.163 Flexible Data Placement Supported: Not Supported 00:27:15.163 00:27:15.163 Controller Memory Buffer Support 00:27:15.163 ================================ 00:27:15.163 Supported: No 00:27:15.163 00:27:15.163 Persistent Memory Region Support 00:27:15.163 ================================ 00:27:15.163 Supported: No 00:27:15.163 00:27:15.163 Admin Command Set Attributes 00:27:15.163 ============================ 00:27:15.163 Security Send/Receive: Not Supported 00:27:15.163 Format NVM: Not Supported 00:27:15.163 Firmware Activate/Download: Not Supported 00:27:15.163 Namespace Management: Not Supported 00:27:15.163 Device Self-Test: Not Supported 00:27:15.163 Directives: Not Supported 00:27:15.163 NVMe-MI: Not Supported 00:27:15.163 Virtualization Management: Not Supported 00:27:15.163 Doorbell Buffer Config: Not Supported 00:27:15.163 Get LBA Status Capability: Not Supported 00:27:15.163 Command & Feature Lockdown Capability: Not Supported 00:27:15.163 Abort Command Limit: 4 00:27:15.163 Async Event Request Limit: 4 00:27:15.163 Number of Firmware Slots: N/A 00:27:15.163 Firmware Slot 1 Read-Only: N/A 00:27:15.163 Firmware Activation Without Reset: N/A 00:27:15.163 Multiple Update Detection Support: N/A 00:27:15.163 Firmware Update Granularity: No Information Provided 00:27:15.163 Per-Namespace SMART Log: Yes 
00:27:15.163 Asymmetric Namespace Access Log Page: Supported 00:27:15.163 ANA Transition Time : 10 sec 00:27:15.163 00:27:15.163 Asymmetric Namespace Access Capabilities 00:27:15.163 ANA Optimized State : Supported 00:27:15.163 ANA Non-Optimized State : Supported 00:27:15.163 ANA Inaccessible State : Supported 00:27:15.163 ANA Persistent Loss State : Supported 00:27:15.163 ANA Change State : Supported 00:27:15.163 ANAGRPID is not changed : No 00:27:15.163 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:15.163 00:27:15.163 ANA Group Identifier Maximum : 128 00:27:15.163 Number of ANA Group Identifiers : 128 00:27:15.163 Max Number of Allowed Namespaces : 1024 00:27:15.163 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:15.163 Command Effects Log Page: Supported 00:27:15.163 Get Log Page Extended Data: Supported 00:27:15.163 Telemetry Log Pages: Not Supported 00:27:15.163 Persistent Event Log Pages: Not Supported 00:27:15.163 Supported Log Pages Log Page: May Support 00:27:15.163 Commands Supported & Effects Log Page: Not Supported 00:27:15.163 Feature Identifiers & Effects Log Page:May Support 00:27:15.163 NVMe-MI Commands & Effects Log Page: May Support 00:27:15.163 Data Area 4 for Telemetry Log: Not Supported 00:27:15.163 Error Log Page Entries Supported: 128 00:27:15.163 Keep Alive: Supported 00:27:15.163 Keep Alive Granularity: 1000 ms 00:27:15.163 00:27:15.163 NVM Command Set Attributes 00:27:15.163 ========================== 00:27:15.163 Submission Queue Entry Size 00:27:15.163 Max: 64 00:27:15.163 Min: 64 00:27:15.163 Completion Queue Entry Size 00:27:15.163 Max: 16 00:27:15.163 Min: 16 00:27:15.163 Number of Namespaces: 1024 00:27:15.163 Compare Command: Not Supported 00:27:15.163 Write Uncorrectable Command: Not Supported 00:27:15.163 Dataset Management Command: Supported 00:27:15.163 Write Zeroes Command: Supported 00:27:15.163 Set Features Save Field: Not Supported 00:27:15.163 Reservations: Not Supported 00:27:15.163 Timestamp: Not Supported 
00:27:15.163 Copy: Not Supported 00:27:15.163 Volatile Write Cache: Present 00:27:15.163 Atomic Write Unit (Normal): 1 00:27:15.163 Atomic Write Unit (PFail): 1 00:27:15.163 Atomic Compare & Write Unit: 1 00:27:15.163 Fused Compare & Write: Not Supported 00:27:15.163 Scatter-Gather List 00:27:15.163 SGL Command Set: Supported 00:27:15.163 SGL Keyed: Not Supported 00:27:15.163 SGL Bit Bucket Descriptor: Not Supported 00:27:15.163 SGL Metadata Pointer: Not Supported 00:27:15.163 Oversized SGL: Not Supported 00:27:15.163 SGL Metadata Address: Not Supported 00:27:15.163 SGL Offset: Supported 00:27:15.163 Transport SGL Data Block: Not Supported 00:27:15.163 Replay Protected Memory Block: Not Supported 00:27:15.163 00:27:15.163 Firmware Slot Information 00:27:15.163 ========================= 00:27:15.163 Active slot: 0 00:27:15.163 00:27:15.163 Asymmetric Namespace Access 00:27:15.163 =========================== 00:27:15.163 Change Count : 0 00:27:15.163 Number of ANA Group Descriptors : 1 00:27:15.163 ANA Group Descriptor : 0 00:27:15.163 ANA Group ID : 1 00:27:15.163 Number of NSID Values : 1 00:27:15.163 Change Count : 0 00:27:15.163 ANA State : 1 00:27:15.163 Namespace Identifier : 1 00:27:15.163 00:27:15.163 Commands Supported and Effects 00:27:15.163 ============================== 00:27:15.163 Admin Commands 00:27:15.163 -------------- 00:27:15.163 Get Log Page (02h): Supported 00:27:15.163 Identify (06h): Supported 00:27:15.163 Abort (08h): Supported 00:27:15.163 Set Features (09h): Supported 00:27:15.163 Get Features (0Ah): Supported 00:27:15.163 Asynchronous Event Request (0Ch): Supported 00:27:15.163 Keep Alive (18h): Supported 00:27:15.163 I/O Commands 00:27:15.163 ------------ 00:27:15.163 Flush (00h): Supported 00:27:15.163 Write (01h): Supported LBA-Change 00:27:15.163 Read (02h): Supported 00:27:15.163 Write Zeroes (08h): Supported LBA-Change 00:27:15.163 Dataset Management (09h): Supported 00:27:15.163 00:27:15.163 Error Log 00:27:15.163 ========= 
00:27:15.163 Entry: 0
00:27:15.163 Error Count: 0x3
00:27:15.163 Submission Queue Id: 0x0
00:27:15.163 Command Id: 0x5
00:27:15.163 Phase Bit: 0
00:27:15.163 Status Code: 0x2
00:27:15.163 Status Code Type: 0x0
00:27:15.163 Do Not Retry: 1
00:27:15.163 Error Location: 0x28
00:27:15.163 LBA: 0x0
00:27:15.163 Namespace: 0x0
00:27:15.163 Vendor Log Page: 0x0
00:27:15.163 -----------
00:27:15.163 Entry: 1
00:27:15.163 Error Count: 0x2
00:27:15.163 Submission Queue Id: 0x0
00:27:15.163 Command Id: 0x5
00:27:15.163 Phase Bit: 0
00:27:15.163 Status Code: 0x2
00:27:15.163 Status Code Type: 0x0
00:27:15.163 Do Not Retry: 1
00:27:15.163 Error Location: 0x28
00:27:15.163 LBA: 0x0
00:27:15.163 Namespace: 0x0
00:27:15.163 Vendor Log Page: 0x0
00:27:15.163 -----------
00:27:15.163 Entry: 2
00:27:15.163 Error Count: 0x1
00:27:15.163 Submission Queue Id: 0x0
00:27:15.163 Command Id: 0x4
00:27:15.163 Phase Bit: 0
00:27:15.163 Status Code: 0x2
00:27:15.163 Status Code Type: 0x0
00:27:15.163 Do Not Retry: 1
00:27:15.163 Error Location: 0x28
00:27:15.163 LBA: 0x0
00:27:15.163 Namespace: 0x0
00:27:15.163 Vendor Log Page: 0x0
00:27:15.163
00:27:15.163 Number of Queues
00:27:15.163 ================
00:27:15.163 Number of I/O Submission Queues: 128
00:27:15.163 Number of I/O Completion Queues: 128
00:27:15.163
00:27:15.163 ZNS Specific Controller Data
00:27:15.163 ============================
00:27:15.163 Zone Append Size Limit: 0
00:27:15.163
00:27:15.163
00:27:15.163 Active Namespaces
00:27:15.163 =================
00:27:15.163 get_feature(0x05) failed
00:27:15.163 Namespace ID:1
00:27:15.163 Command Set Identifier: NVM (00h)
00:27:15.163 Deallocate: Supported
00:27:15.163 Deallocated/Unwritten Error: Not Supported
00:27:15.163 Deallocated Read Value: Unknown
00:27:15.163 Deallocate in Write Zeroes: Not Supported
00:27:15.163 Deallocated Guard Field: 0xFFFF
00:27:15.163 Flush: Supported
00:27:15.163 Reservation: Not Supported
00:27:15.163 Namespace Sharing Capabilities: Multiple Controllers
00:27:15.163 Size (in LBAs): 3125627568 (1490GiB)
00:27:15.163 Capacity (in LBAs): 3125627568 (1490GiB)
00:27:15.164 Utilization (in LBAs): 3125627568 (1490GiB)
00:27:15.164 UUID: fcbb19c1-3d13-4ac8-bda8-912600dc9e05
00:27:15.164 Thin Provisioning: Not Supported
00:27:15.164 Per-NS Atomic Units: Yes
00:27:15.164 Atomic Boundary Size (Normal): 0
00:27:15.164 Atomic Boundary Size (PFail): 0
00:27:15.164 Atomic Boundary Offset: 0
00:27:15.164 NGUID/EUI64 Never Reused: No
00:27:15.164 ANA group ID: 1
00:27:15.164 Namespace Write Protected: No
00:27:15.164 Number of LBA Formats: 1
00:27:15.164 Current LBA Format: LBA Format #00
00:27:15.164 LBA Format #00: Data Size: 512 Metadata Size: 0
00:27:15.164
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:15.164 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:15.164 rmmod nvme_tcp
00:27:15.164 rmmod nvme_fabrics
00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
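The teardown above wraps `modprobe -v -r nvme-tcp` in `set +e` and a `for i in {1..20}` loop because the module can still be busy immediately after disconnect, so the unload is retried until it succeeds. A minimal, unprivileged sketch of that bounded-retry pattern — `retry` and `flaky_unload` are illustrative names, and a stub that succeeds on its third call stands in for the real `modprobe`:

```shell
#!/usr/bin/env bash
# Retry an unreliable cleanup command a bounded number of times,
# tolerating failures (set +e) and restoring strict mode afterwards.
retry() {
  local tries=$1; shift
  set +e
  local i rc=1
  for ((i = 1; i <= tries; i++)); do
    "$@"
    rc=$?
    [ "$rc" -eq 0 ] && break
  done
  set -e
  return "$rc"
}

# Stub standing in for "modprobe -v -r nvme-tcp": fails twice, then succeeds.
attempts_left=3
flaky_unload() {
  attempts_left=$((attempts_left - 1))
  [ "$attempts_left" -le 0 ]
}

retry 20 flaky_unload && echo "unloaded"
```

The `set +e` / `set -e` bracketing mirrors what the trace shows at `nvmf/common.sh@124` and `@128`: failures inside the loop must not abort the whole test script.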
00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.423 12:41:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:17.331 12:41:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:27:17.331 12:41:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:20.621 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:20.621 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:21.559 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:27:21.818
00:27:21.818 real 0m17.264s
00:27:21.818 user 0m4.415s
00:27:21.818 sys 0m8.719s
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:21.818 ************************************
00:27:21.818 END TEST nvmf_identify_kernel_target
00:27:21.818 ************************************
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.818 ************************************
00:27:21.818 START TEST nvmf_auth_host
00:27:21.818 ************************************
00:27:21.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:27:22.078 * Looking for test storage...
00:27:22.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.078 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.079 --rc genhtml_branch_coverage=1 00:27:22.079 --rc genhtml_function_coverage=1 00:27:22.079 --rc genhtml_legend=1 00:27:22.079 --rc geninfo_all_blocks=1 00:27:22.079 --rc geninfo_unexecuted_blocks=1 00:27:22.079 00:27:22.079 ' 00:27:22.079 12:41:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.079 --rc genhtml_branch_coverage=1 00:27:22.079 --rc genhtml_function_coverage=1 00:27:22.079 --rc genhtml_legend=1 00:27:22.079 --rc geninfo_all_blocks=1 00:27:22.079 --rc geninfo_unexecuted_blocks=1 00:27:22.079 00:27:22.079 ' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.079 --rc genhtml_branch_coverage=1 00:27:22.079 --rc genhtml_function_coverage=1 00:27:22.079 --rc genhtml_legend=1 00:27:22.079 --rc geninfo_all_blocks=1 00:27:22.079 --rc geninfo_unexecuted_blocks=1 00:27:22.079 00:27:22.079 ' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:22.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.079 --rc genhtml_branch_coverage=1 00:27:22.079 --rc genhtml_function_coverage=1 00:27:22.079 --rc genhtml_legend=1 00:27:22.079 --rc geninfo_all_blocks=1 00:27:22.079 --rc geninfo_unexecuted_blocks=1 00:27:22.079 00:27:22.079 ' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
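The lcov gate earlier in this trace (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`) decides whether extra `--rc` options are needed by comparing dotted version strings field by numeric field. A standalone sketch of that comparison logic, assuming plain bash (`ver_lt` is an illustrative name, not the script's own helper):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field.
# Returns 0 (true) when $1 is strictly lower than $2; missing fields count as 0.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1  # versions are equal, so not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison per field is what makes `1.2 < 1.15` come out false here, unlike a plain string compare.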
00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.079 12:41:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:22.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:22.079 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.080 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.080 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.080 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:22.080 12:41:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:22.080 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:22.080 12:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:28.653 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:28.653 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
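The probe loop above buckets each discovered NIC by its PCI vendor:device ID into the `e810`, `x722`, and `mlx` arrays before deciding how to drive it (here `0x159b` matches the Intel E810 / `ice` entries at `0000:86:00.0` and `0000:86:00.1`). A condensed sketch of that classification, with the Intel IDs taken from the trace; `classify_nic` is an illustrative name and the Mellanox wildcard is a simplification of the trace's explicit ID list:

```shell
#!/usr/bin/env bash
# Map a "vendor:device" PCI ID to the NIC family the test framework tracks.
classify_nic() {
  case "$1" in
    8086:1592|8086:159b) echo e810 ;;    # Intel E810 (ice driver), as in this run
    8086:37d2)           echo x722 ;;    # Intel X722
    15b3:*)              echo mlx ;;     # Mellanox (trace lists specific IDs)
    *)                   echo unknown ;;
  esac
}

classify_nic 8086:159b   # the device found at 0000:86:00.0 in this log
```

On a live system the "vendor:device" pair would come from `/sys/bus/pci/devices/<bdf>/vendor` and `.../device`, which is the cache the trace's `pci_bus_cache` lookups are built from.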
00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.653 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:28.654 Found net devices under 0000:86:00.0: cvl_0_0 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:28.654 Found net devices under 0000:86:00.1: cvl_0_1 00:27:28.654 12:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:28.654 12:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:28.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:28.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms
00:27:28.654
00:27:28.654 --- 10.0.0.2 ping statistics ---
00:27:28.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:28.654 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms
00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:28.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:28.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:27:28.654 00:27:28.654 --- 10.0.0.1 ping statistics --- 00:27:28.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.654 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=314170 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:28.654 12:41:33 
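The trace above shows `nvmf_tcp_init` splitting one NIC pair across a network namespace: the target-side port (`cvl_0_0`) is moved into `cvl_0_0_ns_spdk` with 10.0.0.2, the initiator-side port (`cvl_0_1`) keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, and both directions are ping-verified. A hedged reconstruction of that sequence (interface names and addresses taken from the log; requires root and the physical NICs, so this is an environment sketch rather than something runnable in isolation):

```
# Sketch of the target/initiator split seen in the trace (run as root).
# cvl_0_0 = target side (moved into a namespace), cvl_0_1 = initiator side.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic on the default discovery/IO port, then verify
# reachability in both directions before starting nvmf_tgt inside the netns.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this topology in place, the target application is launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the `NVMF_TARGET_NS_CMD` prefix in the trace does.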
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 314170 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 314170 ']' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2a29f8cd2ba4937dd63366492b3eed5 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rZn 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2a29f8cd2ba4937dd63366492b3eed5 0 00:27:28.654 12:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2a29f8cd2ba4937dd63366492b3eed5 0 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2a29f8cd2ba4937dd63366492b3eed5 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rZn 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rZn 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rZn 00:27:28.654 12:41:34 
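Each `gen_dhchap_key <digest> <len>` call in the trace follows the same pattern: draw `len/2` random bytes, hex-encode them with `xxd`, write the result to a mode-0600 temp file, and wrap it into DHHC-1 form. A minimal sketch of the random-key half of that pattern (the DHHC-1 prefix/digest/checksum wrapping is done by an inline python helper in `nvmf/common.sh` and is not reproduced here):

```shell
# Sketch of the key-material step of gen_dhchap_key as seen in the trace:
# len is the key length in hex characters, so read len/2 random bytes.
gen_key_hex() {
    local len=$1
    xxd -p -c0 -l $((len / 2)) /dev/urandom
}
key=$(gen_key_hex 32)                    # e.g. a 32-hex-char "null 32" key
file=$(mktemp -t spdk.key-null.XXX)      # same template the trace uses
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"                       # keys must not be world-readable
```

The trace repeats this for every `keys[i]`/`ckeys[i]` pair, varying only the digest label and length (32, 48, or 64 hex characters).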
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=454353d64c21b9df59bda42f11eb97fe728f830c70a1df605f634c2cb7d603ad 00:27:28.654 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CZX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 454353d64c21b9df59bda42f11eb97fe728f830c70a1df605f634c2cb7d603ad 3 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 454353d64c21b9df59bda42f11eb97fe728f830c70a1df605f634c2cb7d603ad 3 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=454353d64c21b9df59bda42f11eb97fe728f830c70a1df605f634c2cb7d603ad 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CZX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CZX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.CZX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6fefb871885267a049d4cf544a46b6d770b029a77420a6b2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xbi 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6fefb871885267a049d4cf544a46b6d770b029a77420a6b2 0 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6fefb871885267a049d4cf544a46b6d770b029a77420a6b2 0 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6fefb871885267a049d4cf544a46b6d770b029a77420a6b2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xbi 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xbi 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Xbi 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a98564a6496905ac85b813b5316869d36c2ff6277b53680 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FWe 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a98564a6496905ac85b813b5316869d36c2ff6277b53680 2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2a98564a6496905ac85b813b5316869d36c2ff6277b53680 2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a98564a6496905ac85b813b5316869d36c2ff6277b53680 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FWe 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FWe 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.FWe 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=31318151a4892d5704e98180322357df 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.abN 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 31318151a4892d5704e98180322357df 1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 31318151a4892d5704e98180322357df 1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=31318151a4892d5704e98180322357df 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.abN 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.abN 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.abN 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=435c9cc4e60fcb6c6b6198f452c8c1c2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.54h 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 435c9cc4e60fcb6c6b6198f452c8c1c2 1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 435c9cc4e60fcb6c6b6198f452c8c1c2 1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=435c9cc4e60fcb6c6b6198f452c8c1c2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.54h 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.54h 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.54h 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:28.655 12:41:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0cbd2366e95d5c5a6c3391f9378ba4342fed8ec7b98fcf02 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7gm 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0cbd2366e95d5c5a6c3391f9378ba4342fed8ec7b98fcf02 2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0cbd2366e95d5c5a6c3391f9378ba4342fed8ec7b98fcf02 2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0cbd2366e95d5c5a6c3391f9378ba4342fed8ec7b98fcf02 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7gm 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7gm 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7gm 00:27:28.655 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:28.656 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:28.915 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f501cba9542662464191d1b9a5bcc0ce 00:27:28.915 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:28.915 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IwC 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f501cba9542662464191d1b9a5bcc0ce 0 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f501cba9542662464191d1b9a5bcc0ce 0 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f501cba9542662464191d1b9a5bcc0ce 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IwC 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IwC 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IwC 00:27:28.916 12:41:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bb39f3ff793b12fb5ec4ba6ec6946c929172c41bcc784905f00b895cba67624b 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hVX 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bb39f3ff793b12fb5ec4ba6ec6946c929172c41bcc784905f00b895cba67624b 3 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bb39f3ff793b12fb5ec4ba6ec6946c929172c41bcc784905f00b895cba67624b 3 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bb39f3ff793b12fb5ec4ba6ec6946c929172c41bcc784905f00b895cba67624b 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hVX 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hVX 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hVX 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 314170 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 314170 ']' 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.916 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rZn 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.CZX ]] 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CZX 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Xbi 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.FWe ]] 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FWe 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.175 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.abN 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.54h ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.54h 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.7gm 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IwC ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IwC 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hVX 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.176 12:41:34 
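The `rpc_cmd keyring_file_add_key ...` loop above registers each generated secret (and its optional controller-side counterpart) with the running `nvmf_tgt`. A dry-run sketch of that loop, building the `rpc.py` invocations rather than executing them, with the file paths taken from the trace; `rpc.py` here stands for SPDK's `scripts/rpc.py`:

```shell
# Dry-run reconstruction of the registration loop: one key per index,
# plus a ckey when the controller-side secret is non-empty.
keys=(/tmp/spdk.key-null.rZn /tmp/spdk.key-null.Xbi /tmp/spdk.key-sha256.abN \
      /tmp/spdk.key-sha384.7gm /tmp/spdk.key-sha512.hVX)
ckeys=(/tmp/spdk.key-sha512.CZX /tmp/spdk.key-sha384.FWe \
       /tmp/spdk.key-sha256.54h /tmp/spdk.key-null.IwC "")
cmds=()
for i in "${!keys[@]}"; do
    cmds+=("rpc.py keyring_file_add_key key$i ${keys[$i]}")
    if [ -n "${ckeys[$i]}" ]; then
        cmds+=("rpc.py keyring_file_add_key ckey$i ${ckeys[$i]}")
    fi
done
printf '%s\n' "${cmds[@]}"
```

In the real run the trace issues these against `/var/tmp/spdk.sock` via `rpc_cmd`, and each call returns 0, matching the `[[ 0 == 0 ]]` checks above.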
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:29.176 12:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:31.709 Waiting for block devices as requested
00:27:31.968 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:27:31.968 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:31.968 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:32.227 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:32.227 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:32.227 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:32.227 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:32.485 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:32.485 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:32.485 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:32.485 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:32.745 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:32.745 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:32.745 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:33.004 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:33.004 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:33.004 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e
/sys/block/nvme0n1/queue/zoned ]] 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:33.572 No valid GPT data, bailing 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:33.572 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:27:33.832
00:27:33.832 Discovery Log Number of Records 2, Generation counter 2
00:27:33.832 =====Discovery Log Entry 0======
00:27:33.832 trtype: tcp
00:27:33.832 adrfam: ipv4
00:27:33.832 subtype: current discovery subsystem
00:27:33.832 treq: not specified, sq flow control disable supported
00:27:33.832 portid: 1
00:27:33.832 trsvcid: 4420
00:27:33.832 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:33.832 traddr: 10.0.0.1
00:27:33.832 eflags: none
00:27:33.832 sectype: none
00:27:33.832 =====Discovery Log Entry 1======
00:27:33.832 trtype: tcp
00:27:33.832 adrfam: ipv4
00:27:33.832 subtype: nvme subsystem
00:27:33.832 treq: not specified, sq flow control disable supported
00:27:33.832 portid: 1
00:27:33.832 trsvcid: 4420
00:27:33.832 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:33.832 traddr: 10.0.0.1
00:27:33.832 eflags: none
00:27:33.832 sectype: none
00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.832 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 nvme0n1 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.092 nvme0n1 00:27:34.092 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.352 12:41:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.352 
12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.352 12:41:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.352 nvme0n1 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.352 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:34.611 nvme0n1 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]]
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.611 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.612 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:34.870 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.871 nvme0n1
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:34.871 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.130 nvme0n1
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.130 12:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.389 nvme0n1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.389 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.649 nvme0n1
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.649 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.909 nvme0n1
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:35.909 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.169 nvme0n1
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.169 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.429 nvme0n1
00:27:36.429 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.429 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:36.429 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:36.429 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.429 12:41:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- #
ip=NVMF_INITIATOR_IP 00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.429 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 nvme0n1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.689 
12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.689 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.948 nvme0n1 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.948 12:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.948 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.949 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.207 nvme0n1 00:27:37.207 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.207 12:41:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.207 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.207 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.207 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.207 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.465 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.465 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.465 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.465 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.465 12:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:37.466 
12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.466 12:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.466 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.724 nvme0n1 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.724 12:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.724 
12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.724 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.982 nvme0n1 00:27:37.982 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.983 12:41:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:37.983 12:41:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.549 nvme0n1
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:38.549 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]]
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.550 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.808 nvme0n1
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:38.808 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.374 nvme0n1
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]]
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.374 12:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.374 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.632 nvme0n1
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.632 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.891 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:39.892 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.892 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.150 nvme0n1
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:40.150 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]]
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.151 12:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.718 nvme0n1
00:27:40.718 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.718 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.718 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.718 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.718 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.978 12:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.546 nvme0n1
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.546 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.115 nvme0n1
00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.115 12:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.683 nvme0n1 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.683 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.942 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.943 
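In the iteration above, keyid 4 has no controller key (`ckey=` is empty and the `[[ -z '' ]]` check at host/auth.sh@51 falls through), so the subsequent attach is issued without `--dhchap-ctrlr-key`. That is the effect of the `${ckeys[keyid]:+...}` expansion at host/auth.sh@58, which can be illustrated standalone (key material below is a placeholder, not the real test keys):

```shell
# Standalone illustration of the ${var:+word} expansion used at
# host/auth.sh@58: the --dhchap-ctrlr-key arguments are produced only
# when a controller key is configured for this keyid.
ckeys=([2]="DHHC-1:01:placeholder:" [4]="")   # keyid 4: no ctrlr key

args_for() {
    local keyid=$1
    # Expands to two words (--dhchap-ctrlr-key ckeyN) or to nothing.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key>}"
}

args_for 2   # keyid=2 -> --dhchap-ctrlr-key ckey2
args_for 4   # keyid=4 -> <no ctrlr key>
```

This keeps a single attach command line for both the bidirectional case (keys 0-3, host and controller authenticate each other) and the unidirectional case (key 4).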
12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.943 12:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.514 nvme0n1 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.514 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.515 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 nvme0n1 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.774 
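The `get_main_ns_ip` helper replayed before every attach in this trace (nvmf/common.sh@769-783) picks the connect address by transport: `rdma` resolves `NVMF_FIRST_TARGET_IP`, `tcp` resolves `NVMF_INITIATOR_IP`. A hedged reconstruction from the xtrace entries, runnable standalone, with `TEST_TRANSPORT` and `NVMF_INITIATOR_IP` set to the values visible in this run:

```shell
# Hedged reconstruction of get_main_ns_ip from the xtrace lines above
# (nvmf/common.sh@769-783): map transport -> variable name, then resolve
# the name via bash indirect expansion ${!ip}.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z ${TEST_TRANSPORT:-} ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip:-} ]] && return 1
    echo "${!ip}"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1   # value echoed throughout this log
get_main_ns_ip               # -> 10.0.0.1
```

This matches the trace, where `ip=NVMF_INITIATOR_IP` is assigned and `10.0.0.1` is what `echo` finally emits and the attach command consumes as `-a`.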
12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.774 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.775 nvme0n1 
00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.775 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:44.034 12:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.034 
12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.034 nvme0n1 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.034 12:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.034 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.035 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.294 nvme0n1 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.294 12:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.295 12:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.295 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.554 nvme0n1 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.554 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.813 nvme0n1 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.813 
12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.813 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.814 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.073 nvme0n1 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 
00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.073 12:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.073 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.332 nvme0n1 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.332 12:41:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:45.332 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.333 12:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.333 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.592 nvme0n1 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.592 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.593 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.851 nvme0n1
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.851 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.852 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.112 nvme0n1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==:
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.112 12:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.371 nvme0n1
00:27:46.371 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.371 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.371 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.372 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.372 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.372 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u:
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw:
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.631 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.891 nvme0n1
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==:
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5:
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:46.891 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.892 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.151 nvme0n1
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=:
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.151 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.152 12:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.412 nvme0n1
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P:
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=:
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.412 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.981 nvme0n1
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:
00:27:47.981 12:41:53
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.981 
12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.981 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.982 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.982 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.982 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.982 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.240 nvme0n1 00:27:48.240 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.241 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.241 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.241 12:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.241 12:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.500 12:41:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.500 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.759 nvme0n1 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.759 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:48.760 12:41:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.760 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.018 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.018 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.018 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.018 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.018 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.018 12:41:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.019 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.019 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.019 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.019 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.019 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.278 nvme0n1 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.278 12:41:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:49.278 12:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 nvme0n1 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.847 12:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.847 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.848 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.416 nvme0n1 00:27:50.416 12:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.416 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.984 nvme0n1 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:50.984 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.985 12:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.552 nvme0n1 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.552 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.811 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.379 nvme0n1 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.379 12:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.379 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.947 nvme0n1 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.947 12:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.947 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 nvme0n1 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.206 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.207 12:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.465 nvme0n1 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.465 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.466 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 nvme0n1 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.724 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.983 nvme0n1 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.983 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.984 nvme0n1 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.984 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.243 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.244 12:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.244 12:41:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.244 nvme0n1 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.244 12:41:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:54.503 12:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.503 nvme0n1 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.503 
12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.503 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.762 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.763 12:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 nvme0n1 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.763 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.763 12:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.023 12:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 nvme0n1 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.023 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:55.283 12:42:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.283 nvme0n1 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.283 
12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.283 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.284 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.284 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.284 12:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.284 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.543 
12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.543 nvme0n1 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.543 12:42:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.543 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.801 
12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:55.801 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.802 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 nvme0n1 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 
00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.060 12:42:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.060 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.061 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.061 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.061 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.320 nvme0n1 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.320 12:42:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.320 12:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.320 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.594 nvme0n1 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.594 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.903 12:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.903 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.904 nvme0n1 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.904 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.193 12:42:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.193 12:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.453 nvme0n1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.453 12:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:57.453 12:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.453 12:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.453 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.021 nvme0n1 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.021 12:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:58.021 12:42:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.021 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.022 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.281 nvme0n1 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.281 12:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.281 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.540 12:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.540 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.799 nvme0n1 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.799 
12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.799 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.800 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.368 nvme0n1 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.368 12:42:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJhMjlmOGNkMmJhNDkzN2RkNjMzNjY0OTJiM2VlZDWftO6P: 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: ]] 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0MzUzZDY0YzIxYjlkZjU5YmRhNDJmMTFlYjk3ZmU3MjhmODMwYzcwYTFkZjYwNWY2MzRjMmNiN2Q2MDNhZCMj14I=: 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.368 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.369 12:42:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 nvme0n1 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:27:59.937 12:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.937 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.938 12:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.506 nvme0n1 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.506 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
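The `get_main_ns_ip` expansion traced repeatedly above (`nvmf/common.sh@769`–`@783`) picks the target address by transport: it maps each transport to the *name* of an environment variable, then resolves that name indirectly. A minimal stand-alone sketch of that lookup, with variable names taken from the trace and the `10.0.0.2` rdma value being an assumption for illustration (this run only exercises tcp):

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip lookup seen in the trace (nvmf/common.sh).
# TEST_TRANSPORT / NVMF_INITIATOR_IP values are taken from this run's log;
# NVMF_FIRST_TARGET_IP is a made-up value, only consulted for rdma.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)
	# Matches the traced guards: [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# Indirect expansion resolves the variable named by the candidate entry,
	# matching the traced ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1.
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}

get_main_ns_ip   # prints 10.0.0.1 for this tcp run
```

The indirection is why the trace shows the variable name (`ip=NVMF_INITIATOR_IP`) on one line and the resolved address (`echo 10.0.0.1`) a few lines later.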
00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.766 
12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.766 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.334 nvme0n1 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.334 12:42:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGNiZDIzNjZlOTVkNWM1YTZjMzM5MWY5Mzc4YmE0MzQyZmVkOGVjN2I5OGZjZjAysvvTvQ==: 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjUwMWNiYTk1NDI2NjI0NjQxOTFkMWI5YTViY2MwY2UbtRa5: 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.334 12:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
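Every secret passed to `nvmet_auth_set_key` and `bdev_nvme_attach_controller` above uses the DH-HMAC-CHAP textual representation, `DHHC-1:<hash-id>:<base64 blob>:`. As a hedged illustration, the `key0` value from this trace can be pulled apart as follows; the claim that the blob's last 4 bytes are a CRC-32 trailer over the secret comes from the nvme-cli key format, not from this log, so it is printed rather than relied on:

```python
import base64
import zlib

# key0 exactly as it appears in the nvmf_auth_host trace above.
KEY0 = "DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==:"

def parse_dhchap_key(key: str):
    """Split a DHHC-1 secret into (hash_id, secret, trailer).

    Assumption (nvme-cli in-band auth key format, not stated in this log):
    the base64 blob is the raw secret followed by a 4-byte CRC-32 trailer.
    """
    prefix, hash_id, blob = key.rstrip(":").split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DH-HMAC-CHAP secret")
    raw = base64.b64decode(blob)
    return hash_id, raw[:-4], raw[-4:]

hash_id, secret, trailer = parse_dhchap_key(KEY0)
print(hash_id, len(secret))  # "00" (no transformation), 48-byte secret
# Integrity check under the CRC-32 assumption above (shown, not asserted):
print(zlib.crc32(secret).to_bytes(4, "little") == trailer)
```

The `keyid=4` entry later in the trace has an empty `ckey=`, which is why that round attaches with `--dhchap-key key4` only and no `--dhchap-ctrlr-key`.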
00:28:01.902 nvme0n1 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmIzOWYzZmY3OTNiMTJmYjVlYzRiYTZlYzY5NDZjOTI5MTcyYzQxYmNjNzg0OTA1ZjAwYjg5NWNiYTY3NjI0YgoVZjM=: 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.902 
12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.902 12:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.471 nvme0n1 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:28:02.471 
12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.471 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.730 request: 00:28:02.730 { 00:28:02.730 "name": "nvme0", 00:28:02.730 "trtype": "tcp", 00:28:02.730 "traddr": "10.0.0.1", 00:28:02.730 "adrfam": "ipv4", 00:28:02.730 "trsvcid": "4420", 00:28:02.730 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.730 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.730 "prchk_reftag": false, 00:28:02.730 "prchk_guard": false, 00:28:02.730 "hdgst": false, 00:28:02.730 "ddgst": false, 00:28:02.730 "allow_unrecognized_csi": false, 00:28:02.730 "method": "bdev_nvme_attach_controller", 00:28:02.730 "req_id": 1 00:28:02.730 } 00:28:02.730 Got JSON-RPC error response 00:28:02.730 response: 00:28:02.730 { 00:28:02.730 "code": -5, 00:28:02.730 "message": "Input/output 
error" 00:28:02.730 } 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.730 request: 00:28:02.730 { 00:28:02.730 "name": "nvme0", 00:28:02.730 "trtype": "tcp", 00:28:02.730 "traddr": "10.0.0.1", 
00:28:02.730 "adrfam": "ipv4", 00:28:02.730 "trsvcid": "4420", 00:28:02.730 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.730 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.730 "prchk_reftag": false, 00:28:02.730 "prchk_guard": false, 00:28:02.730 "hdgst": false, 00:28:02.730 "ddgst": false, 00:28:02.730 "dhchap_key": "key2", 00:28:02.730 "allow_unrecognized_csi": false, 00:28:02.730 "method": "bdev_nvme_attach_controller", 00:28:02.730 "req_id": 1 00:28:02.730 } 00:28:02.730 Got JSON-RPC error response 00:28:02.730 response: 00:28:02.730 { 00:28:02.730 "code": -5, 00:28:02.730 "message": "Input/output error" 00:28:02.730 } 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.730 12:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.730 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:02.731 12:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.731 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.990 request: 00:28:02.990 { 00:28:02.990 "name": "nvme0", 00:28:02.990 "trtype": "tcp", 00:28:02.990 "traddr": "10.0.0.1", 00:28:02.990 "adrfam": "ipv4", 00:28:02.990 "trsvcid": "4420", 00:28:02.990 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:02.990 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:02.990 "prchk_reftag": false, 00:28:02.990 "prchk_guard": false, 00:28:02.990 "hdgst": false, 00:28:02.990 "ddgst": false, 00:28:02.990 "dhchap_key": "key1", 00:28:02.990 "dhchap_ctrlr_key": "ckey2", 00:28:02.990 "allow_unrecognized_csi": false, 00:28:02.990 "method": "bdev_nvme_attach_controller", 00:28:02.990 "req_id": 1 00:28:02.990 } 00:28:02.990 Got JSON-RPC error response 00:28:02.990 response: 00:28:02.990 { 00:28:02.990 "code": -5, 00:28:02.990 "message": "Input/output error" 00:28:02.990 } 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.990 nvme0n1 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.990 12:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.990 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.249 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.249 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.249 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:03.249 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.250 12:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 request: 00:28:03.250 { 00:28:03.250 "name": "nvme0", 00:28:03.250 "dhchap_key": "key1", 00:28:03.250 "dhchap_ctrlr_key": "ckey2", 00:28:03.250 "method": "bdev_nvme_set_keys", 00:28:03.250 "req_id": 1 00:28:03.250 } 00:28:03.250 Got JSON-RPC error response 00:28:03.250 response: 00:28:03.250 { 00:28:03.250 "code": -13, 00:28:03.250 "message": "Permission denied" 00:28:03.250 } 00:28:03.250 
12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:03.250 12:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:04.187 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.187 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:04.187 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.187 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.187 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.446 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:04.446 12:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:05.384 12:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.384 12:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:05.384 12:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.384 12:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.384 12:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlZmI4NzE4ODUyNjdhMDQ5ZDRjZjU0NGE0NmI2ZDc3MGIwMjlhNzc0MjBhNmIy4LtRHg==: 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: ]] 00:28:05.384 12:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmE5ODU2NGE2NDk2OTA1YWM4NWI4MTNiNTMxNjg2OWQzNmMyZmY2Mjc3YjUzNjgwlMN4XQ==: 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.384 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.643 nvme0n1 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.643 12:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzEzMTgxNTFhNDg5MmQ1NzA0ZTk4MTgwMzIyMzU3ZGbe5Y4u: 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: ]] 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM1YzljYzRlNjBmY2I2YzZiNjE5OGY0NTJjOGMxYzJ0JqZw: 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:05.643 
12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.643 request: 00:28:05.643 { 00:28:05.643 "name": "nvme0", 00:28:05.643 "dhchap_key": "key2", 00:28:05.643 "dhchap_ctrlr_key": "ckey1", 00:28:05.643 "method": "bdev_nvme_set_keys", 00:28:05.643 "req_id": 1 00:28:05.643 } 00:28:05.643 Got JSON-RPC error response 00:28:05.643 response: 00:28:05.643 { 00:28:05.643 "code": -13, 00:28:05.643 "message": "Permission denied" 00:28:05.643 } 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.643 12:42:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:05.643 12:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.578 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.578 rmmod nvme_tcp 00:28:06.837 rmmod nvme_fabrics 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 314170 ']' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 314170 ']' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314170' 00:28:06.837 killing process with pid 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 314170 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.837 12:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:09.371 12:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:11.907 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:11.907 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:12.166 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:12.166 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:13.557 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:13.557 12:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rZn /tmp/spdk.key-null.Xbi /tmp/spdk.key-sha256.abN /tmp/spdk.key-sha384.7gm /tmp/spdk.key-sha512.hVX 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:13.557 12:42:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:16.101 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:16.101 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:16.101 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:16.360 00:28:16.360 real 0m54.451s 00:28:16.360 user 0m48.683s 00:28:16.360 sys 0m12.672s 00:28:16.360 12:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.360 12:42:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.360 ************************************ 00:28:16.360 END TEST nvmf_auth_host 00:28:16.360 ************************************ 00:28:16.360 12:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:16.360 12:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:16.360 12:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:16.360 12:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.360 12:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.360 ************************************ 00:28:16.360 START TEST nvmf_digest 00:28:16.360 ************************************ 00:28:16.360 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:16.620 * Looking for test storage... 00:28:16.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.620 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.621 --rc genhtml_branch_coverage=1 00:28:16.621 --rc genhtml_function_coverage=1 00:28:16.621 --rc genhtml_legend=1 00:28:16.621 --rc geninfo_all_blocks=1 00:28:16.621 --rc geninfo_unexecuted_blocks=1 00:28:16.621 00:28:16.621 ' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.621 --rc genhtml_branch_coverage=1 00:28:16.621 --rc genhtml_function_coverage=1 00:28:16.621 --rc genhtml_legend=1 00:28:16.621 --rc geninfo_all_blocks=1 00:28:16.621 --rc geninfo_unexecuted_blocks=1 00:28:16.621 00:28:16.621 ' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.621 --rc genhtml_branch_coverage=1 00:28:16.621 --rc genhtml_function_coverage=1 00:28:16.621 --rc genhtml_legend=1 00:28:16.621 --rc geninfo_all_blocks=1 00:28:16.621 --rc geninfo_unexecuted_blocks=1 00:28:16.621 00:28:16.621 ' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.621 --rc genhtml_branch_coverage=1 00:28:16.621 --rc genhtml_function_coverage=1 00:28:16.621 --rc genhtml_legend=1 00:28:16.621 --rc geninfo_all_blocks=1 00:28:16.621 --rc geninfo_unexecuted_blocks=1 00:28:16.621 00:28:16.621 ' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:16.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.621 12:42:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.621 12:42:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.193 12:42:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:23.193 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.193 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:23.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:23.194 Found net devices under 0000:86:00.0: cvl_0_0 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:23.194 Found net devices under 0000:86:00.1: cvl_0_1 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.194 12:42:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:23.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:28:23.194 00:28:23.194 --- 10.0.0.2 ping statistics --- 00:28:23.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.194 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:23.194 00:28:23.194 --- 10.0.0.1 ping statistics --- 00:28:23.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.194 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:23.194 ************************************ 00:28:23.194 START TEST nvmf_digest_clean 00:28:23.194 ************************************ 00:28:23.194 
12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=328449 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 328449 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 328449 ']' 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.194 12:42:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.194 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.194 [2024-11-20 12:42:28.254963] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:23.194 [2024-11-20 12:42:28.255008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.194 [2024-11-20 12:42:28.334743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.194 [2024-11-20 12:42:28.375024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.194 [2024-11-20 12:42:28.375060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.194 [2024-11-20 12:42:28.375067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.194 [2024-11-20 12:42:28.375073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.194 [2024-11-20 12:42:28.375078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:23.195 [2024-11-20 12:42:28.375607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.195 null0 00:28:23.195 [2024-11-20 12:42:28.526733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.195 [2024-11-20 12:42:28.550935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=328476 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 328476 /var/tmp/bperf.sock 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 328476 ']' 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.195 [2024-11-20 12:42:28.603880] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:23.195 [2024-11-20 12:42:28.603922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328476 ] 00:28:23.195 [2024-11-20 12:42:28.677469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.195 [2024-11-20 12:42:28.717566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.195 12:42:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.455 12:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.455 12:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.717 nvme0n1 00:28:23.717 12:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.717 12:42:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.976 Running I/O for 2 seconds... 00:28:25.850 25799.00 IOPS, 100.78 MiB/s [2024-11-20T11:42:31.616Z] 25443.00 IOPS, 99.39 MiB/s 00:28:25.850 Latency(us) 00:28:25.850 [2024-11-20T11:42:31.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:25.850 nvme0n1 : 2.05 24933.72 97.40 0.00 0.00 5025.88 2543.42 46187.28 00:28:25.850 [2024-11-20T11:42:31.616Z] =================================================================================================================== 00:28:25.850 [2024-11-20T11:42:31.616Z] Total : 24933.72 97.40 0.00 0.00 5025.88 2543.42 46187.28 00:28:25.850 { 00:28:25.850 "results": [ 00:28:25.850 { 00:28:25.850 "job": "nvme0n1", 00:28:25.850 "core_mask": "0x2", 00:28:25.850 "workload": "randread", 00:28:25.850 "status": "finished", 00:28:25.850 "queue_depth": 128, 00:28:25.850 "io_size": 4096, 00:28:25.850 "runtime": 2.045984, 00:28:25.850 "iops": 24933.72382188717, 00:28:25.850 "mibps": 97.39735867924676, 00:28:25.850 "io_failed": 0, 00:28:25.850 "io_timeout": 0, 00:28:25.850 "avg_latency_us": 5025.8804814364685, 00:28:25.850 "min_latency_us": 2543.4209523809523, 00:28:25.850 "max_latency_us": 46187.276190476194 00:28:25.850 } 00:28:25.850 ], 00:28:25.850 "core_count": 1 00:28:25.850 } 00:28:25.850 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:25.850 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.109 | select(.opcode=="crc32c") 00:28:26.109 | "\(.module_name) \(.executed)"' 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 328476 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 328476 ']' 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 328476 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328476 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328476' 00:28:26.109 killing process with pid 328476 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 328476 00:28:26.109 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.109 00:28:26.109 Latency(us) 00:28:26.109 [2024-11-20T11:42:31.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.109 [2024-11-20T11:42:31.875Z] =================================================================================================================== 00:28:26.109 [2024-11-20T11:42:31.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.109 12:42:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 328476 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.368 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=328960 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 328960 /var/tmp/bperf.sock 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 328960 ']' 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.369 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.369 [2024-11-20 12:42:32.058032] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:26.369 [2024-11-20 12:42:32.058087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328960 ] 00:28:26.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.369 Zero copy mechanism will not be used. 
00:28:26.627 [2024-11-20 12:42:32.134973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.627 [2024-11-20 12:42:32.176374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.627 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.627 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:26.627 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:26.627 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:26.627 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:26.886 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.886 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.145 nvme0n1 00:28:27.145 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.145 12:42:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.145 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.145 Zero copy mechanism will not be used. 00:28:27.145 Running I/O for 2 seconds... 
00:28:29.461 5985.00 IOPS, 748.12 MiB/s [2024-11-20T11:42:35.227Z] 6098.50 IOPS, 762.31 MiB/s 00:28:29.461 Latency(us) 00:28:29.461 [2024-11-20T11:42:35.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.461 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:29.461 nvme0n1 : 2.00 6097.34 762.17 0.00 0.00 2621.62 631.95 10922.67 00:28:29.461 [2024-11-20T11:42:35.227Z] =================================================================================================================== 00:28:29.461 [2024-11-20T11:42:35.227Z] Total : 6097.34 762.17 0.00 0.00 2621.62 631.95 10922.67 00:28:29.461 { 00:28:29.461 "results": [ 00:28:29.461 { 00:28:29.461 "job": "nvme0n1", 00:28:29.461 "core_mask": "0x2", 00:28:29.461 "workload": "randread", 00:28:29.461 "status": "finished", 00:28:29.461 "queue_depth": 16, 00:28:29.461 "io_size": 131072, 00:28:29.461 "runtime": 2.003003, 00:28:29.461 "iops": 6097.344836727653, 00:28:29.461 "mibps": 762.1681045909567, 00:28:29.461 "io_failed": 0, 00:28:29.461 "io_timeout": 0, 00:28:29.461 "avg_latency_us": 2621.6215844942744, 00:28:29.461 "min_latency_us": 631.9542857142857, 00:28:29.461 "max_latency_us": 10922.666666666666 00:28:29.461 } 00:28:29.461 ], 00:28:29.461 "core_count": 1 00:28:29.461 } 00:28:29.461 12:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:29.461 12:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:29.461 12:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:29.461 12:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:29.461 | select(.opcode=="crc32c") 00:28:29.461 | "\(.module_name) \(.executed)"' 00:28:29.461 12:42:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 328960 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 328960 ']' 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 328960 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328960 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328960' 00:28:29.461 killing process with pid 328960 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 328960 00:28:29.461 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.461 
00:28:29.461 Latency(us) 00:28:29.461 [2024-11-20T11:42:35.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.461 [2024-11-20T11:42:35.227Z] =================================================================================================================== 00:28:29.461 [2024-11-20T11:42:35.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.461 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 328960 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=329634 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 329634 /var/tmp/bperf.sock 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 329634 ']' 00:28:29.721 12:42:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.721 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:29.721 [2024-11-20 12:42:35.363613] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:29.721 [2024-11-20 12:42:35.363664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329634 ] 00:28:29.721 [2024-11-20 12:42:35.434992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.721 [2024-11-20 12:42:35.477791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.980 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.980 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:29.980 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:29.980 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:29.980 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:30.239 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.239 12:42:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.498 nvme0n1 00:28:30.498 12:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:30.498 12:42:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.498 Running I/O for 2 seconds... 
00:28:32.812 28556.00 IOPS, 111.55 MiB/s [2024-11-20T11:42:38.578Z] 28554.50 IOPS, 111.54 MiB/s 00:28:32.812 Latency(us) 00:28:32.812 [2024-11-20T11:42:38.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.812 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.812 nvme0n1 : 2.00 28565.35 111.58 0.00 0.00 4474.94 1856.85 7021.71 00:28:32.812 [2024-11-20T11:42:38.578Z] =================================================================================================================== 00:28:32.812 [2024-11-20T11:42:38.578Z] Total : 28565.35 111.58 0.00 0.00 4474.94 1856.85 7021.71 00:28:32.812 { 00:28:32.812 "results": [ 00:28:32.812 { 00:28:32.812 "job": "nvme0n1", 00:28:32.812 "core_mask": "0x2", 00:28:32.812 "workload": "randwrite", 00:28:32.812 "status": "finished", 00:28:32.812 "queue_depth": 128, 00:28:32.812 "io_size": 4096, 00:28:32.812 "runtime": 2.003721, 00:28:32.812 "iops": 28565.354158587947, 00:28:32.812 "mibps": 111.58341468198417, 00:28:32.812 "io_failed": 0, 00:28:32.812 "io_timeout": 0, 00:28:32.812 "avg_latency_us": 4474.94218235457, 00:28:32.812 "min_latency_us": 1856.8533333333332, 00:28:32.812 "max_latency_us": 7021.714285714285 00:28:32.812 } 00:28:32.812 ], 00:28:32.812 "core_count": 1 00:28:32.812 } 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:32.812 | select(.opcode=="crc32c") 00:28:32.812 | "\(.module_name) \(.executed)"' 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 329634 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 329634 ']' 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 329634 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329634 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329634' 00:28:32.812 killing process with pid 329634 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 329634 00:28:32.812 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.812 
00:28:32.812 Latency(us) 00:28:32.812 [2024-11-20T11:42:38.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.812 [2024-11-20T11:42:38.578Z] =================================================================================================================== 00:28:32.812 [2024-11-20T11:42:38.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.812 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 329634 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=330116 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 330116 /var/tmp/bperf.sock 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 330116 ']' 00:28:33.071 12:42:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.071 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:33.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.072 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.072 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:33.072 [2024-11-20 12:42:38.718125] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:33.072 [2024-11-20 12:42:38.718172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330116 ] 00:28:33.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.072 Zero copy mechanism will not be used. 
00:28:33.072 [2024-11-20 12:42:38.792550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.331 [2024-11-20 12:42:38.835052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.331 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.331 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:33.331 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:33.331 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:33.331 12:42:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:33.589 12:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.589 12:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.846 nvme0n1 00:28:33.846 12:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:33.846 12:42:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.846 Zero copy mechanism will not be used. 00:28:33.847 Running I/O for 2 seconds... 
00:28:35.715 7017.00 IOPS, 877.12 MiB/s [2024-11-20T11:42:41.481Z] 6749.00 IOPS, 843.62 MiB/s 00:28:35.715 Latency(us) 00:28:35.715 [2024-11-20T11:42:41.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.715 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:35.715 nvme0n1 : 2.00 6744.59 843.07 0.00 0.00 2367.63 1693.01 8426.06 00:28:35.715 [2024-11-20T11:42:41.481Z] =================================================================================================================== 00:28:35.715 [2024-11-20T11:42:41.481Z] Total : 6744.59 843.07 0.00 0.00 2367.63 1693.01 8426.06 00:28:35.715 { 00:28:35.715 "results": [ 00:28:35.715 { 00:28:35.715 "job": "nvme0n1", 00:28:35.715 "core_mask": "0x2", 00:28:35.715 "workload": "randwrite", 00:28:35.715 "status": "finished", 00:28:35.715 "queue_depth": 16, 00:28:35.715 "io_size": 131072, 00:28:35.715 "runtime": 2.003679, 00:28:35.715 "iops": 6744.5933205867805, 00:28:35.715 "mibps": 843.0741650733476, 00:28:35.715 "io_failed": 0, 00:28:35.715 "io_timeout": 0, 00:28:35.715 "avg_latency_us": 2367.632156846163, 00:28:35.715 "min_latency_us": 1693.0133333333333, 00:28:35.715 "max_latency_us": 8426.057142857142 00:28:35.715 } 00:28:35.715 ], 00:28:35.715 "core_count": 1 00:28:35.715 } 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:35.974 | select(.opcode=="crc32c") 00:28:35.974 | "\(.module_name) \(.executed)"' 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 330116 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 330116 ']' 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 330116 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.974 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330116 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330116' 00:28:36.233 killing process with pid 330116 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 330116 00:28:36.233 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.233 
00:28:36.233 Latency(us) 00:28:36.233 [2024-11-20T11:42:41.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.233 [2024-11-20T11:42:41.999Z] =================================================================================================================== 00:28:36.233 [2024-11-20T11:42:41.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 330116 00:28:36.233 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 328449 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 328449 ']' 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 328449 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328449 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328449' 00:28:36.234 killing process with pid 328449 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 328449 00:28:36.234 12:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 328449 00:28:36.492 00:28:36.492 real 0m13.922s 
00:28:36.492 user 0m26.639s 00:28:36.492 sys 0m4.604s 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 ************************************ 00:28:36.492 END TEST nvmf_digest_clean 00:28:36.492 ************************************ 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:36.492 ************************************ 00:28:36.492 START TEST nvmf_digest_error 00:28:36.492 ************************************ 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:36.492 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=330765 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 330765 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 330765 ']' 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.493 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.493 [2024-11-20 12:42:42.246964] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:36.493 [2024-11-20 12:42:42.247004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.752 [2024-11-20 12:42:42.321408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.752 [2024-11-20 12:42:42.361382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.752 [2024-11-20 12:42:42.361418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:36.752 [2024-11-20 12:42:42.361425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.752 [2024-11-20 12:42:42.361431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.752 [2024-11-20 12:42:42.361436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.752 [2024-11-20 12:42:42.361991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.752 [2024-11-20 12:42:42.426430] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.752 12:42:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.752 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.752 null0 00:28:37.012 [2024-11-20 12:42:42.515609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.012 [2024-11-20 12:42:42.539797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=330847 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 330847 /var/tmp/bperf.sock 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 330847 ']' 
00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.012 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.012 [2024-11-20 12:42:42.590497] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:37.012 [2024-11-20 12:42:42.590539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330847 ] 00:28:37.012 [2024-11-20 12:42:42.663754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.012 [2024-11-20 12:42:42.705241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.271 12:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.531 nvme0n1 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:37.531 12:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.791 Running I/O for 2 seconds... 00:28:37.791 [2024-11-20 12:42:43.390877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.390910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.390921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.400864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.400889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.400899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.410471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.410495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.410504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.418926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.418949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9950 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.418959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.430108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.430131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.430140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.442386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.442407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.442415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.451291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.451313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.451321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.460614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.460635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.460643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.471966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.471987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.471995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.479466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.479488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.479496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.490903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.490924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.490932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.501989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 
00:28:37.791 [2024-11-20 12:42:43.502011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.502023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.510311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.510333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.510341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.521857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.521880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.521888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.533668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.533690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.533698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.544272] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.544292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.544301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.791 [2024-11-20 12:42:43.553350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:37.791 [2024-11-20 12:42:43.553371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.791 [2024-11-20 12:42:43.553380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.564575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.564602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.564610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.574388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.574409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.574417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.582814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.582835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.582843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.592639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.592661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.592669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.604156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.604178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.604186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.614902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.614924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.614933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.623783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.623805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.623814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.633310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.633331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.633341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.643173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.643195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.643210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.652328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.652350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.652358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.661969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.661999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.672670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.672691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.672703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.680690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.680711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.680720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.690976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.690997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.691005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.702539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.702562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.702570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.714050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.714071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.714079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.722284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.722305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.722314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.732148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.732170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:71 nsid:1 lba:1884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.732178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.741416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.741438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.741446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.751605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.751627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.751635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.760457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.760484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.760493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.052 [2024-11-20 12:42:43.768954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.052 [2024-11-20 12:42:43.768974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.052 [2024-11-20 12:42:43.768981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.053 [2024-11-20 12:42:43.779297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.053 [2024-11-20 12:42:43.779317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.053 [2024-11-20 12:42:43.779326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.053 [2024-11-20 12:42:43.790536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.053 [2024-11-20 12:42:43.790557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.053 [2024-11-20 12:42:43.790565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.053 [2024-11-20 12:42:43.799830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.053 [2024-11-20 12:42:43.799850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.053 [2024-11-20 12:42:43.799859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.053 [2024-11-20 12:42:43.811518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 
00:28:38.053 [2024-11-20 12:42:43.811540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.053 [2024-11-20 12:42:43.811549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.312 [2024-11-20 12:42:43.822792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.312 [2024-11-20 12:42:43.822813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.312 [2024-11-20 12:42:43.822822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.312 [2024-11-20 12:42:43.834565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.312 [2024-11-20 12:42:43.834585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.312 [2024-11-20 12:42:43.834593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.312 [2024-11-20 12:42:43.844002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.844024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.844032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.853995] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.854016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.854025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.861631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.861650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.861659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.872334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.872355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.872363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.883423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.883443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.883451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.892089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.892108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.892116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.902023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.902044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.902052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.910902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.910923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.910932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.921008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.921029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.921037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.930585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.930606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.930617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.938367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.938388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.938396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.950763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.950783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.950791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.960422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.960443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.960451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.971116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.971137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.971145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.982904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.982924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.982933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:43.991730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:43.991750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:43.991758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.001048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.001068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.001076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.011555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.011577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.011585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.021297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.021323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.021331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.031561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.031582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.031591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.039971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.039992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:17 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.040000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.049736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.049757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.049766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.059014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.059035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.059044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.313 [2024-11-20 12:42:44.069638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.313 [2024-11-20 12:42:44.069660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.313 [2024-11-20 12:42:44.069668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.080930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.080951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.080959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.088918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.088940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.088948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.099658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.099679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.099688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.108081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.108102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.108111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.119383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 
00:28:38.573 [2024-11-20 12:42:44.119404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.119412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.131533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.131554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.131562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.139561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.139581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.139589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.149796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.149816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.149824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.159939] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.159960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.159968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.168380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.168402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.168409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.178354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.178374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.178383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.188538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.188558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.188570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.196715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.196736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.573 [2024-11-20 12:42:44.196744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.573 [2024-11-20 12:42:44.207037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.573 [2024-11-20 12:42:44.207057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.207066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.219543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.219572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.227523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.227543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.227551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.238807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.238828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.238836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.246796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.246817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.256052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.256073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.256081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.266405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.266426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.266434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.275953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.275974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.275982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.285083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.285104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.285112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.294036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.294057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.302533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.302553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15791 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.302562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.312638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.312659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.312667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.574 [2024-11-20 12:42:44.323353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.574 [2024-11-20 12:42:44.323373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.574 [2024-11-20 12:42:44.323382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.833 [2024-11-20 12:42:44.336057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.833 [2024-11-20 12:42:44.336078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.833 [2024-11-20 12:42:44.336086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.347764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.347784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:91 nsid:1 lba:8544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.347792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.356024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.356046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.356057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.367248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.367269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.376476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.376498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.376506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 25603.00 IOPS, 100.01 MiB/s [2024-11-20T11:42:44.600Z] [2024-11-20 12:42:44.390162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 
00:28:38.834 [2024-11-20 12:42:44.390183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.401857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.401878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.401886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.412850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.412869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.412877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.422385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.422405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.422413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.433410] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.433431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.433439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.444582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.444603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.444611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.454637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.454661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.454669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.463626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.463647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.463654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.472708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.472730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.472738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.482265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.482286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.482294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.492167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.492187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.492195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.500687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.500706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.500714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.511835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.511855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.511864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.521855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.521876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.521884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.530273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.530292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.530300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.542449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.542470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.542478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.553550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.553571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.553580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.561997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.562018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.562026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.573850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.573872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.573880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.834 [2024-11-20 12:42:44.586164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:38.834 [2024-11-20 12:42:44.586185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4264 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.834 [2024-11-20 12:42:44.586193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.094 [2024-11-20 12:42:44.598582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.094 [2024-11-20 12:42:44.598603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.094 [2024-11-20 12:42:44.598611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.094 [2024-11-20 12:42:44.610814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.094 [2024-11-20 12:42:44.610835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.094 [2024-11-20 12:42:44.610843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.094 [2024-11-20 12:42:44.622636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.094 [2024-11-20 12:42:44.622656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.094 [2024-11-20 12:42:44.622664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.094 [2024-11-20 12:42:44.630569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.094 [2024-11-20 12:42:44.630589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:12387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.094 [2024-11-20 12:42:44.630601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.642383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.642405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.642413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.653666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.653687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.653695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.662094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.662116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.662124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.672529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.672549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.672558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.684005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.684025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.684033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.692918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.692938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.692946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.704462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.704482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.704489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.715178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.715199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.715215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.723061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.723086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.723094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.733065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.733086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.733095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.742484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.742505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.742514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.751666] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.751686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.751695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.761923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.761946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.761954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.769694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.769716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.769724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.779001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.779023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.779031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.788041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.788063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.788071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.796829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.796851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.796860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.806304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.806326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.806335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.815798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.815820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.815829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.825478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.825500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.825508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.835172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.835193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.835207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.844394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.844414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.844422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.095 [2024-11-20 12:42:44.852969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.095 [2024-11-20 12:42:44.852990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.095 [2024-11-20 12:42:44.852999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.355 [2024-11-20 12:42:44.862100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.355 [2024-11-20 12:42:44.862121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.355 [2024-11-20 12:42:44.862129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.355 [2024-11-20 12:42:44.870751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.355 [2024-11-20 12:42:44.870772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.355 [2024-11-20 12:42:44.870780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.355 [2024-11-20 12:42:44.880361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.355 [2024-11-20 12:42:44.880382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.355 [2024-11-20 12:42:44.880393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.355 [2024-11-20 12:42:44.890053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.355 [2024-11-20 12:42:44.890074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:39.355 [2024-11-20 12:42:44.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.900284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.900305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.900314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.910605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.910626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.910634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.918944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.918966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.918975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.928996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.929018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:23188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.929026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.937479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.937500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.937509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.947524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.947545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.947553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.956824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.956845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.956853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.966704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.966725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.966733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.974941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.974961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.974969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.986514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.986534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.986542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:44.997726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:44.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:44.997756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.006797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 
00:28:39.356 [2024-11-20 12:42:45.006817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.006825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.015897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.015917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.015926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.023988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.024009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.024016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.033810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.033831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.033840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.042011] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.042032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.042046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.052304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.052326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.052334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.060862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.060883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.060891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.070395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.070416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.070425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.078741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.078762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.078770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.088073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.088094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.088102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.097446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.097468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.097476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.106862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.106884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.106892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.356 [2024-11-20 12:42:45.116609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.356 [2024-11-20 12:42:45.116630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.356 [2024-11-20 12:42:45.116639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.124754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.124779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.124788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.134849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.134871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.134879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.146081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.146103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.146112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.154963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.154985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.154992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.163384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.163405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.163413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.172497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.172518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.172526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.182565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.182586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.616 [2024-11-20 12:42:45.194519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.616 [2024-11-20 12:42:45.194539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.616 [2024-11-20 12:42:45.194547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.204314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.204334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.204342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.212970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.212990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.212998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.224935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.224955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.224964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.236844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.236865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.236874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.247815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.247836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.247844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.256390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.256410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.256418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.268660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.268682] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.268690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.279505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.279526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.279535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.289143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.289165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.289173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.297792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.297814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.297825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.310549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.310569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.310577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.322857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.322877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.322886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.330922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.330942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.330950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.341761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.341781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.341790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.352928] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.352948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.352956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.361076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.361097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.361105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.617 [2024-11-20 12:42:45.372400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.617 [2024-11-20 12:42:45.372420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.617 [2024-11-20 12:42:45.372428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.876 [2024-11-20 12:42:45.385538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x910370) 00:28:39.876 [2024-11-20 12:42:45.385560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.876 [2024-11-20 12:42:45.385568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:39.876 25624.00 IOPS, 100.09 MiB/s 00:28:39.876 Latency(us) 00:28:39.876 [2024-11-20T11:42:45.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.876 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:39.876 nvme0n1 : 2.05 25119.74 98.12 0.00 0.00 4991.49 2293.76 54426.09 00:28:39.876 [2024-11-20T11:42:45.642Z] =================================================================================================================== 00:28:39.876 [2024-11-20T11:42:45.642Z] Total : 25119.74 98.12 0.00 0.00 4991.49 2293.76 54426.09 00:28:39.876 { 00:28:39.876 "results": [ 00:28:39.876 { 00:28:39.876 "job": "nvme0n1", 00:28:39.876 "core_mask": "0x2", 00:28:39.876 "workload": "randread", 00:28:39.876 "status": "finished", 00:28:39.876 "queue_depth": 128, 00:28:39.876 "io_size": 4096, 00:28:39.876 "runtime": 2.048747, 00:28:39.876 "iops": 25119.74392152862, 00:28:39.876 "mibps": 98.12399969347118, 00:28:39.876 "io_failed": 0, 00:28:39.876 "io_timeout": 0, 00:28:39.876 "avg_latency_us": 4991.4862679783555, 00:28:39.876 "min_latency_us": 2293.76, 00:28:39.876 "max_latency_us": 54426.08761904762 00:28:39.876 } 00:28:39.876 ], 00:28:39.876 "core_count": 1 00:28:39.876 } 00:28:39.876 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:39.876 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:39.876 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:39.876 | .driver_specific 00:28:39.876 | .nvme_error 00:28:39.876 | .status_code 00:28:39.876 | .command_transient_transport_error' 00:28:39.876 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 )) 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 330847 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 330847 ']' 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 330847 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330847 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330847' 00:28:40.136 killing process with pid 330847 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 330847 00:28:40.136 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.136 00:28:40.136 Latency(us) 00:28:40.136 [2024-11-20T11:42:45.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.136 [2024-11-20T11:42:45.902Z] =================================================================================================================== 00:28:40.136 [2024-11-20T11:42:45.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@978 -- # wait 330847 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=331324 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 331324 /var/tmp/bperf.sock 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 331324 ']' 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.136 12:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:40.396 [2024-11-20 12:42:45.910833] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:40.396 [2024-11-20 12:42:45.910884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331324 ] 00:28:40.396 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:40.396 Zero copy mechanism will not be used. 
00:28:40.396 [2024-11-20 12:42:45.986544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.396 [2024-11-20 12:42:46.023264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.396 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.396 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:40.396 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.396 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.655 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.915 nvme0n1 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:40.915 12:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.176 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.176 Zero copy mechanism will not be used. 00:28:41.176 Running I/O for 2 seconds... 00:28:41.176 [2024-11-20 12:42:46.733581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.733618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.733629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.739480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.739507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.739516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.176 
[2024-11-20 12:42:46.745576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.745599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.745607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.750999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.751022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.751030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.756884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.756907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.756915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.763829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.763852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.763861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.771323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.771345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.771354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.778498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.778521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.778534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.786304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.786328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.786337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.794274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.794297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.794306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.798394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.798417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.798425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.804543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.804565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.804574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.812220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.812243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.812251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.819228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.819251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.819260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.825334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.825357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.825366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.830759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.830781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.830789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.176 [2024-11-20 12:42:46.836171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.176 [2024-11-20 12:42:46.836197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.176 [2024-11-20 12:42:46.836211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.841606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.841628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.841636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.847035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.847056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.847064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.852422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.852443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.852451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.857831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.857853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.857860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.863243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.863264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.863271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.868591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.868621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.874013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.874035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.874043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.879429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.879450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.879459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.884859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.884880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.884889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.890255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.890276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.890284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.895637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.895658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.895666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.900977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.900998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.901006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.906385] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.906407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.906415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.911450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.911473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.911481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.916670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.916693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.916701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.921768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.921790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.921798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.927041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.927062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.927074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.931925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.931947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.931954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.177 [2024-11-20 12:42:46.937076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.177 [2024-11-20 12:42:46.937098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.177 [2024-11-20 12:42:46.937106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.942163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.942184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.942193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.947307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.947329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.947337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.952470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.952492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.952500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.957663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.957684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.957692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.962822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.962844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 
12:42:46.962852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.967966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.967986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.967994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.973119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.973147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.973155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.978309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.978331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.978340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.983434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.983456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.438 [2024-11-20 12:42:46.983464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.438 [2024-11-20 12:42:46.988632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.438 [2024-11-20 12:42:46.988654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:46.988662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:46.993895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:46.993916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:46.993925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:46.999138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:46.999159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:46.999167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.004364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.004386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.004394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.009758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.009780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.009789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.015112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.015133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.015141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.020361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.020382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.020390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.023255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 
12:42:47.023276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.023284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.028632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.028654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.028662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.034088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.034110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.034118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.038866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.038888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.038896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.043957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.043978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.043986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.049143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.049165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.049173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.054357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.054379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.054387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.059598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.059624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.059632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.064779] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.064801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.064810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.069903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.069925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.069934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.075146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.075168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.075176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.081005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.081027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.081035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.086386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.086409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.086416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.091762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.091783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.091792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.097121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.097142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.097150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.439 [2024-11-20 12:42:47.102464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.439 [2024-11-20 12:42:47.102485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.439 [2024-11-20 12:42:47.102493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.107771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.107794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.107802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.113082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.113104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.113112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.118081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.118102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.118110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.123304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.123326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 
12:42:47.123333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.128563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.128585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.128593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.133808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.133830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.133838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.139253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.139275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.139283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.144019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.144040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.144048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.147073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.147094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.147106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.152490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.152511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.152519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.157940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.157962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.157970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.163280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.163301] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.163309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.168764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.168785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.168793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.174219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.174240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.174248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.179537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.179558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.179566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.184946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 
12:42:47.184967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.184975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.190219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.190241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.190249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.440 [2024-11-20 12:42:47.195709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.440 [2024-11-20 12:42:47.195736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.440 [2024-11-20 12:42:47.195744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.201197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.201226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.201234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.206638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.206659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.206667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.212091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.212113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.212121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.217418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.217438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.217448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.222653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.222675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.222682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.228096] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.228116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.228124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.233609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.233632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.233640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.239224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.239246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.239254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.244849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.244872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.244880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.250364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.250384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.250391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.255691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.255712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.255719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.261045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.261067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.261074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.266406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.266426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.266434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.271739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.271759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.271767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.277187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.277215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.277224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.282562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.282583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.282591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.287987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.288009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.288020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.293479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.293507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.298553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.298575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.298583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.700 [2024-11-20 12:42:47.304239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.700 [2024-11-20 12:42:47.304261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.700 [2024-11-20 12:42:47.304268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.309879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.309902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:41.701 [2024-11-20 12:42:47.309910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.315618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.315639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.315647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.321268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.321290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.321298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.326747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.326770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.326778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.332597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.332619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.332627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.338406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.338432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.338440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.344012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.344034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.344042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.349092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.349114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.349122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.354544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.354566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.354574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.359926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.359948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.359956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.365308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.365329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.365337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.371081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.371105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.371114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.378997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 
00:28:41.701 [2024-11-20 12:42:47.379019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.379028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.387628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.387652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.387660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.395737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.395760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.395769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.404071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.404094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.404103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.411955] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.411985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.411994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.420813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.420836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.420845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.429403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.429426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.429434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.437530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.437554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.437563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.445986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.446009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.446017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.701 [2024-11-20 12:42:47.454471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.701 [2024-11-20 12:42:47.454495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.701 [2024-11-20 12:42:47.454504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.462317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.462344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.462353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.470208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.470232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.470240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.478086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.478108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.478117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.486063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.486085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.486094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.492356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.492379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.492388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.498147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.498169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.498177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.503790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.503811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.503819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.509253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.509274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.514627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.514648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.520333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.520355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.520363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.525944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.525966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.525974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.531539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.531561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.531568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.536910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.536932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.993 [2024-11-20 12:42:47.536940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:41.993 [2024-11-20 12:42:47.542440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:41.993 [2024-11-20 12:42:47.542461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.542470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.547961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.547984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.547992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.553596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.553619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.553627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.559046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.559076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.564714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.564736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.564748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.570145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.570166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.570175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.575803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.575825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.993 [2024-11-20 12:42:47.575833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.993 [2024-11-20 12:42:47.581240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.993 [2024-11-20 12:42:47.581262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.581270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.586847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.586868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.586876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.592294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.592315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.592323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.597878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.597900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.597908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.603330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.603353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.603361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.608753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.608776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.608784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.614222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.614249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.614257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.619890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.619912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.619920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.625476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.625499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.625509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.630656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.630679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.630687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.636146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.636169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.636177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.641969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.641992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.642002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.647458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.647481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.647492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.653113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.653135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.653144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.659238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.659261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.659270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.664746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.664769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.664778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.670298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.670321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.670328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.676535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.676557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.676566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.683894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.683918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.683926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.691513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.691537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.699506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.699530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.707511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.707535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.707544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.715849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.715872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.715882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:41.994 [2024-11-20 12:42:47.724655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:41.994 [2024-11-20 12:42:47.724680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.994 [2024-11-20 12:42:47.724695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.309 5315.00 IOPS, 664.38 MiB/s [2024-11-20T11:42:48.075Z]
[2024-11-20 12:42:47.734112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.734137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.734146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.742401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.742426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.742435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.751283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.751308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.751317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.759199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.759238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.766994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.767017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.767026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.774810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.774833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.774843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.782510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.782533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.790547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.790570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.790578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.798450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.798473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.798481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.806721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.806744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.806753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.813636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.813659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.813667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.817320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.817342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.817351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.824698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.824722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.824730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.830296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.830318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.309 [2024-11-20 12:42:47.830327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.309 [2024-11-20 12:42:47.835877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.309 [2024-11-20 12:42:47.835899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.835907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.840612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.840635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.840644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.846033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.846055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.846068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.851551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.851573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.851582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.857596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.857619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.857627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.863165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.863187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.863196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.868876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.868898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.868907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.874385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.874408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.874416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.879978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.880000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.880008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.885498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.885519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.885527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.891088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.891109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.891117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.896779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.896807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.896815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.902267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.902291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.902299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.907170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.907192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.907207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.914367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.914390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.914399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.920862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.920885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.920893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.928754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.928777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.928786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.936591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.936614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.936622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.942949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.942973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.942982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.948881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.948904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.948913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.956502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.956526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.956535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.963981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.964005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.964014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.971304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.971327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.971336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.978270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.978293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.978302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.985686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.985711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.985719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:47.994035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:47.994059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:47.994067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:48.002609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:48.002633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.310 [2024-11-20 12:42:48.002642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.310 [2024-11-20 12:42:48.009516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.310 [2024-11-20 12:42:48.009539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.009548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.311 [2024-11-20 12:42:48.016785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.311 [2024-11-20 12:42:48.016808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.016820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.311 [2024-11-20 12:42:48.022996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.311 [2024-11-20 12:42:48.023019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.023028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.311 [2024-11-20 12:42:48.029278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.311 [2024-11-20 12:42:48.029303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.029311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.311 [2024-11-20 12:42:48.036382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.311 [2024-11-20 12:42:48.036406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.036414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.311 [2024-11-20 12:42:48.044998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.311 [2024-11-20 12:42:48.045022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.311 [2024-11-20 12:42:48.045031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:42.605 [2024-11-20 12:42:48.052985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.605 [2024-11-20 12:42:48.053008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.605 [2024-11-20 12:42:48.053016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:42.606 [2024-11-20 12:42:48.060997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.606 [2024-11-20 12:42:48.061020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.606 [2024-11-20 12:42:48.061029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:42.606 [2024-11-20 12:42:48.067731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.606 [2024-11-20 12:42:48.067754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.606 [2024-11-20 12:42:48.067762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:42.606 [2024-11-20 12:42:48.075848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.606 [2024-11-20 12:42:48.075872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.606 [2024-11-20 12:42:48.075882]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.083842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.083870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.083878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.091130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.091154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.091164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.097096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.097118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.097127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.102487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.102508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.102517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.107892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.107913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.113321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.113343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.113352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.118911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.118933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.118940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.125013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.125036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.125044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.131002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.131024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.131033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.136658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.136681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.136690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.142650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.142673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.142681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.148356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.148378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.153904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.153926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.153934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.159818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.159840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.159848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.164739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.164762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.164770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.170314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.170335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.170343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.175552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.175573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.175581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.181034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.181059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.181068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.186375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.186397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.186405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.606 [2024-11-20 12:42:48.189932] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.606 [2024-11-20 12:42:48.189953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.606 [2024-11-20 12:42:48.189961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.194354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.194376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.194384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.199781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.199802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.199811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.205482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.205504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.205512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.211402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.211423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.211432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.216993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.217014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.217022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.222628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.222651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.222659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.228315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.228337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.228345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.234048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.234070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.234078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.239597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.239619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.239627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.245305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.245327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.245335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.251087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.251109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 
12:42:48.251117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.256877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.256899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.256908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.262692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.262713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.262721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.268179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.268206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.268215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.273596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.273618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.273629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.279634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.279665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.286855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.286879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.286887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.294855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.294878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.294887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.301405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.301428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.301436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.308043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.308068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.308076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.316266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.316289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.316298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.323906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.323929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.323938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.330932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.330956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.330964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.337143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.337170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.337179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.342673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.342695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.342704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.347918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.347940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.607 [2024-11-20 12:42:48.347949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:42.607 [2024-11-20 12:42:48.353245] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.607 [2024-11-20 12:42:48.353268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.608 [2024-11-20 12:42:48.353276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:42.608 [2024-11-20 12:42:48.359140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.608 [2024-11-20 12:42:48.359163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.608 [2024-11-20 12:42:48.359171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.608 [2024-11-20 12:42:48.364594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.608 [2024-11-20 12:42:48.364616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.608 [2024-11-20 12:42:48.364624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:42.868 [2024-11-20 12:42:48.370008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580) 00:28:42.868 [2024-11-20 12:42:48.370029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.868 [2024-11-20 12:42:48.370037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0
00:28:42.868 [2024-11-20 12:42:48.375413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff9580)
00:28:42.868 [2024-11-20 12:42:48.375434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.868 [2024-11-20 12:42:48.375442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1365 data digest error, nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further qid:1 READ commands (cid 6/8/9/10/14, varying lba) from 12:42:48.380772 through 12:42:48.731259 ...]
5265.00 IOPS, 658.12 MiB/s [2024-11-20T11:42:48.896Z]
00:28:43.130
00:28:43.130                                              Latency(us)
00:28:43.130 [2024-11-20T11:42:48.896Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average     min       max
00:28:43.130 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:43.130 nvme0n1            :       2.00         5265.81  658.23    0.00  0.00  3035.25  655.36  12545.46
00:28:43.130 [2024-11-20T11:42:48.896Z] ===================================================================================================================
00:28:43.130 [2024-11-20T11:42:48.896Z] Total              :              5265.81  658.23    0.00  0.00  3035.25  655.36  12545.46
00:28:43.130 {
00:28:43.130   "results": [
00:28:43.130     {
00:28:43.130       "job": "nvme0n1",
00:28:43.130       "core_mask": "0x2",
"workload": "randread", 00:28:43.130 "status": "finished", 00:28:43.130 "queue_depth": 16, 00:28:43.130 "io_size": 131072, 00:28:43.130 "runtime": 2.002731, 00:28:43.130 "iops": 5265.809537077122, 00:28:43.130 "mibps": 658.2261921346402, 00:28:43.130 "io_failed": 0, 00:28:43.130 "io_timeout": 0, 00:28:43.130 "avg_latency_us": 3035.250557647675, 00:28:43.130 "min_latency_us": 655.36, 00:28:43.130 "max_latency_us": 12545.462857142857 00:28:43.130 } 00:28:43.130 ], 00:28:43.130 "core_count": 1 00:28:43.130 } 00:28:43.130 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:43.130 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:43.130 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:43.130 | .driver_specific 00:28:43.131 | .nvme_error 00:28:43.131 | .status_code 00:28:43.131 | .command_transient_transport_error' 00:28:43.131 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 331324 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 331324 ']' 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 331324 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:43.390 12:42:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.390 12:42:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331324 00:28:43.390 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.390 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:43.390 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331324' 00:28:43.390 killing process with pid 331324 00:28:43.390 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 331324 00:28:43.390 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.390 00:28:43.390 Latency(us) 00:28:43.390 [2024-11-20T11:42:49.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.390 [2024-11-20T11:42:49.156Z] =================================================================================================================== 00:28:43.390 [2024-11-20T11:42:49.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.390 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 331324 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=331850 00:28:43.670 12:42:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 331850 /var/tmp/bperf.sock 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 331850 ']' 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.670 [2024-11-20 12:42:49.210240] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:28:43.670 [2024-11-20 12:42:49.210286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331850 ] 00:28:43.670 [2024-11-20 12:42:49.284623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.670 [2024-11-20 12:42:49.326150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.670 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.929 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.188 nvme0n1 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.188 12:42:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.447 Running I/O for 2 seconds... 
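The digest-error stream that follows is driven by `accel_error_inject_error -o crc32c -t corrupt -i 256` above: the accel layer's CRC32C results are corrupted, so the host's data-digest check on received PDUs fails. NVMe/TCP data digests are CRC-32C (Castagnoli). A minimal bitwise sketch of that checksum for reference; SPDK's real implementation is table-driven or hardware-accelerated, this is only illustrative:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit; XOR in the polynomial when a bit falls off.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
print(hex(crc32c(b"123456789")))  # 0xe3069283
```

When the receiver's computed CRC-32C over a data PDU's payload disagrees with the digest field in the PDU, the transport surfaces it as the `data digest error on tqpair` messages and TRANSIENT TRANSPORT ERROR completions seen below.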
00:28:44.447 [2024-11-20 12:42:49.986957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ebfd0 00:28:44.447 [2024-11-20 12:42:49.987842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:49.987871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:49.996034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ec408 00:28:44.447 [2024-11-20 12:42:49.996985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:49.997009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.006046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618 00:28:44.447 [2024-11-20 12:42:50.007316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.007343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.016313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ec408 00:28:44.447 [2024-11-20 12:42:50.017548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.017571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.023718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1ca0 00:28:44.447 [2024-11-20 12:42:50.024472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.024492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.036289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e73e0 00:28:44.447 [2024-11-20 12:42:50.037430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.037452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.046705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e99d8 00:28:44.447 [2024-11-20 12:42:50.047847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.047868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.060026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0ea0 00:28:44.447 [2024-11-20 12:42:50.061859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.061879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.071003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fc560 00:28:44.447 [2024-11-20 12:42:50.073710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.073735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.079679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166de8a8 00:28:44.447 [2024-11-20 12:42:50.080755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.080775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.089412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0788 00:28:44.447 [2024-11-20 12:42:50.090604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.090623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.096824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fcdd0 00:28:44.447 [2024-11-20 12:42:50.097525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.097546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.108951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7100 00:28:44.447 [2024-11-20 12:42:50.110381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.110402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.116362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e1f80 00:28:44.447 [2024-11-20 12:42:50.117301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.117320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.126481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f96f8 00:28:44.447 [2024-11-20 12:42:50.127215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.127235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.135726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f81e0 00:28:44.447 [2024-11-20 12:42:50.136756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 
[2024-11-20 12:42:50.136776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.146323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f9f68 00:28:44.447 [2024-11-20 12:42:50.147816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.147835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.153696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e27f0 00:28:44.447 [2024-11-20 12:42:50.154701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.154720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.162780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8e88 00:28:44.447 [2024-11-20 12:42:50.164020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.164039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.173022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9e10 00:28:44.447 [2024-11-20 12:42:50.174138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:998 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.174162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.180830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f4298 00:28:44.447 [2024-11-20 12:42:50.181290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.181310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.192473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ebb98 00:28:44.447 [2024-11-20 12:42:50.193946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.447 [2024-11-20 12:42:50.193966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:44.447 [2024-11-20 12:42:50.199846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8e88 00:28:44.448 [2024-11-20 12:42:50.200853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.448 [2024-11-20 12:42:50.200873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:44.448 [2024-11-20 12:42:50.209209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fb8b8 00:28:44.706 [2024-11-20 12:42:50.210194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:9875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.210220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.219387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e12d8 00:28:44.706 [2024-11-20 12:42:50.220616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.220635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.226759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fdeb0 00:28:44.706 [2024-11-20 12:42:50.227527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.227547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.236786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618 00:28:44.706 [2024-11-20 12:42:50.237340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.237359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.246238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee5c8 00:28:44.706 [2024-11-20 12:42:50.247199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.247222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.255512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fdeb0 00:28:44.706 [2024-11-20 12:42:50.256473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.256494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.265547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618 00:28:44.706 [2024-11-20 12:42:50.266326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.266347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.274820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166dfdc0 00:28:44.706 [2024-11-20 12:42:50.275923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.706 [2024-11-20 12:42:50.275942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:44.706 [2024-11-20 12:42:50.284032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166df988 00:28:44.706 
[2024-11-20 12:42:50.285215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.285234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.293712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8a50 00:28:44.707 [2024-11-20 12:42:50.295061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.295083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.301078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0ea0 00:28:44.707 [2024-11-20 12:42:50.301839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.301859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.310940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4140 00:28:44.707 [2024-11-20 12:42:50.311692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.311713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.320137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889640) with pdu=0x2000166e6300 00:28:44.707 [2024-11-20 12:42:50.320879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.320899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.328637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fb8b8 00:28:44.707 [2024-11-20 12:42:50.329364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.329384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.338486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eb328 00:28:44.707 [2024-11-20 12:42:50.339222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.339242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.347555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9e10 00:28:44.707 [2024-11-20 12:42:50.348274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.348295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.357071] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618 00:28:44.707 [2024-11-20 12:42:50.357987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.358008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.366460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f3a28 00:28:44.707 [2024-11-20 12:42:50.367412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.367431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.376906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f92c0 00:28:44.707 [2024-11-20 12:42:50.378268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.378288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.384265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e1b48 00:28:44.707 [2024-11-20 12:42:50.385158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.385177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:28:44.707 [2024-11-20 12:42:50.393940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e84c0 00:28:44.707 [2024-11-20 12:42:50.395002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.395022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.403408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6300 00:28:44.707 [2024-11-20 12:42:50.403994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.404015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.414024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ea248 00:28:44.707 [2024-11-20 12:42:50.415420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.415447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.707 [2024-11-20 12:42:50.421425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eaef0 00:28:44.707 [2024-11-20 12:42:50.422332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.707 [2024-11-20 12:42:50.422352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:44.707 [2024-11-20 12:42:50.430771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1430
00:28:44.707 [2024-11-20 12:42:50.431229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.707 [2024-11-20 12:42:50.431250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:44.707 [2024-11-20 12:42:50.440045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5a90
00:28:44.707 [2024-11-20 12:42:50.440842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.707 [2024-11-20 12:42:50.440863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:28:44.707 [2024-11-20 12:42:50.448844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee5c8
00:28:44.707 [2024-11-20 12:42:50.449628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.707 [2024-11-20 12:42:50.449647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:44.707 [2024-11-20 12:42:50.461006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5a90
00:28:44.707 [2024-11-20 12:42:50.462499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.707 [2024-11-20 12:42:50.462518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:28:44.707 [2024-11-20 12:42:50.467522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e38d0
00:28:44.707 [2024-11-20 12:42:50.468178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.707 [2024-11-20 12:42:50.468197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.476777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0630
00:28:44.966 [2024-11-20 12:42:50.477548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.477568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.486449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6020
00:28:44.966 [2024-11-20 12:42:50.487363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.487383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.495697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f31b8
00:28:44.966 [2024-11-20 12:42:50.496299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.496320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.504215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0a68
00:28:44.966 [2024-11-20 12:42:50.504785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.504808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.516232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ea680
00:28:44.966 [2024-11-20 12:42:50.517540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.517560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.522806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e49b0
00:28:44.966 [2024-11-20 12:42:50.523413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.523434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.534020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6b70
00:28:44.966 [2024-11-20 12:42:50.535098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.535118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:44.966 [2024-11-20 12:42:50.541382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9168
00:28:44.966 [2024-11-20 12:42:50.541988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.966 [2024-11-20 12:42:50.542008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.553387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5ec8
00:28:44.967 [2024-11-20 12:42:50.554714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.554735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.559971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ea680
00:28:44.967 [2024-11-20 12:42:50.560572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.560592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.572045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eaab8
00:28:44.967 [2024-11-20 12:42:50.573350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.573370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.579273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f96f8
00:28:44.967 [2024-11-20 12:42:50.580118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.580138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.588229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ef270
00:28:44.967 [2024-11-20 12:42:50.588808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.588827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.598290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9168
00:28:44.967 [2024-11-20 12:42:50.599323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.599343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.606052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f31b8
00:28:44.967 [2024-11-20 12:42:50.606632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.606651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.616135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0a68
00:28:44.967 [2024-11-20 12:42:50.617158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.617178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.625299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ea680
00:28:44.967 [2024-11-20 12:42:50.626335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.626355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.633887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eb328
00:28:44.967 [2024-11-20 12:42:50.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.634790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.642998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ff3c8
00:28:44.967 [2024-11-20 12:42:50.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.643952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.654098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fc998
00:28:44.967 [2024-11-20 12:42:50.655523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.655546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.660582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0bc0
00:28:44.967 [2024-11-20 12:42:50.661300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.661319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.669723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eee38
00:28:44.967 [2024-11-20 12:42:50.670454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.670474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.678823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e73e0
00:28:44.967 [2024-11-20 12:42:50.679437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.679457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.688774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e2c28
00:28:44.967 [2024-11-20 12:42:50.689840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.697802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fda78
00:28:44.967 [2024-11-20 12:42:50.698640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.698660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.706664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fda78
00:28:44.967 [2024-11-20 12:42:50.707497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.707517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.715933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6fa8
00:28:44.967 [2024-11-20 12:42:50.716881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.716901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:44.967 [2024-11-20 12:42:50.725086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0630
00:28:44.967 [2024-11-20 12:42:50.726074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.967 [2024-11-20 12:42:50.726094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:45.227 [2024-11-20 12:42:50.733711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5658
00:28:45.227 [2024-11-20 12:42:50.734669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.227 [2024-11-20 12:42:50.734692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:28:45.227 [2024-11-20 12:42:50.744481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5658
00:28:45.227 [2024-11-20 12:42:50.746066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.227 [2024-11-20 12:42:50.746086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:28:45.227 [2024-11-20 12:42:50.751193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fbcf0
00:28:45.227 [2024-11-20 12:42:50.751991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.227 [2024-11-20 12:42:50.752011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:45.227 [2024-11-20 12:42:50.761918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9168
00:28:45.227 [2024-11-20 12:42:50.762946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.227 [2024-11-20 12:42:50.762966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:28:45.227 [2024-11-20 12:42:50.770653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6890
00:28:45.228 [2024-11-20 12:42:50.771683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.781905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7100
00:28:45.228 [2024-11-20 12:42:50.783436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.783455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.788257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ec408
00:28:45.228 [2024-11-20 12:42:50.788861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.788880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.797704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ea248
00:28:45.228 [2024-11-20 12:42:50.798419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.798438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.807352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ef270
00:28:45.228 [2024-11-20 12:42:50.808403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.808422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.816791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8a50
00:28:45.228 [2024-11-20 12:42:50.817863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.817883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.824824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9168
00:28:45.228 [2024-11-20 12:42:50.825436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.825456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.833199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0a68
00:28:45.228 [2024-11-20 12:42:50.833794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.833813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.842775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0a68
00:28:45.228 [2024-11-20 12:42:50.843378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.843397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.851852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6890
00:28:45.228 [2024-11-20 12:42:50.852458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.852478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.860097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f92c0
00:28:45.228 [2024-11-20 12:42:50.860680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.860704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.869064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f46d0
00:28:45.228 [2024-11-20 12:42:50.869740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.869760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.878235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e23b8
00:28:45.228 [2024-11-20 12:42:50.878907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.878926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.888739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1868
00:28:45.228 [2024-11-20 12:42:50.889526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.889546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.897339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7538
00:28:45.228 [2024-11-20 12:42:50.898078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.898098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.905863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e9168
00:28:45.228 [2024-11-20 12:42:50.906430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.906450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.914942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ddc00
00:28:45.228 [2024-11-20 12:42:50.915755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.924943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f5be8
00:28:45.228 [2024-11-20 12:42:50.926086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.926105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.933722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f2510
00:28:45.228 [2024-11-20 12:42:50.934861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.934880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.942940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166df118
00:28:45.228 [2024-11-20 12:42:50.943628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.943648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.951419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5220
00:28:45.228 [2024-11-20 12:42:50.952668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.952688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.959120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee190
00:28:45.228 [2024-11-20 12:42:50.959787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.968562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f4b08
00:28:45.228 [2024-11-20 12:42:50.969341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.969363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:45.228 27365.00 IOPS, 106.89 MiB/s [2024-11-20T11:42:50.994Z]
[2024-11-20 12:42:50.978417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e7c50
00:28:45.228 [2024-11-20 12:42:50.979306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.979326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:28:45.228 [2024-11-20 12:42:50.988186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5ec8
00:28:45.228 [2024-11-20 12:42:50.989380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.228 [2024-11-20 12:42:50.989399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:50.997561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7970
00:28:45.487 [2024-11-20 12:42:50.998755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:50.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.006070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1430
00:28:45.487 [2024-11-20 12:42:51.007356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.007375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.013814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e3060
00:28:45.487 [2024-11-20 12:42:51.014491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.014510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.023243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7da8
00:28:45.487 [2024-11-20 12:42:51.024030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.024050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.032623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6b70
00:28:45.487 [2024-11-20 12:42:51.033570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.033589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.041929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e73e0
00:28:45.487 [2024-11-20 12:42:51.042955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.042974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.051325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0ff8
00:28:45.487 [2024-11-20 12:42:51.052503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.052522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.060413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4140
00:28:45.487 [2024-11-20 12:42:51.061117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.061136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.068885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1ca0
00:28:45.487 [2024-11-20 12:42:51.070171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.070190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:45.487 [2024-11-20 12:42:51.078285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e1b48
00:28:45.487 [2024-11-20 12:42:51.079037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.487 [2024-11-20 12:42:51.079057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.087608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f96f8
00:28:45.488 [2024-11-20 12:42:51.088775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.088802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.097029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fbcf0
00:28:45.488 [2024-11-20 12:42:51.098306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.098325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.106409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618
00:28:45.488 [2024-11-20 12:42:51.107808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.107827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.112866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e0630
00:28:45.488 [2024-11-20 12:42:51.113578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.113597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.122271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f96f8
00:28:45.488 [2024-11-20 12:42:51.123095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.123114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.131660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e23b8
00:28:45.488 [2024-11-20 12:42:51.132588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.488 [2024-11-20 12:42:51.132608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:28:45.488 [2024-11-20 12:42:51.141076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fa7d8 00:28:45.488 [2024-11-20 12:42:51.142156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.142175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.150240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8a50 00:28:45.488 [2024-11-20 12:42:51.150860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.150880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.159643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0788 00:28:45.488 [2024-11-20 12:42:51.160386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.160405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.168415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e23b8 00:28:45.488 [2024-11-20 12:42:51.169453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.169473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.178291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e84c0 00:28:45.488 [2024-11-20 12:42:51.179598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.179617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.187701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fda78 00:28:45.488 [2024-11-20 12:42:51.189111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.196131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f46d0 00:28:45.488 [2024-11-20 12:42:51.197180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.197200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.205670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6300 00:28:45.488 [2024-11-20 12:42:51.206945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.206967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.215089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e27f0 00:28:45.488 [2024-11-20 12:42:51.216503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.216522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.224485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0350 00:28:45.488 [2024-11-20 12:42:51.226012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.226030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.230806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0ff8 00:28:45.488 [2024-11-20 12:42:51.231458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.231478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.240303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ddc00 00:28:45.488 [2024-11-20 12:42:51.241188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.488 [2024-11-20 12:42:51.241211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:45.488 [2024-11-20 12:42:51.249541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e49b0 00:28:45.749 [2024-11-20 12:42:51.250539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.250558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.259333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166feb58 00:28:45.749 [2024-11-20 12:42:51.260421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.260440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.268539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fdeb0 00:28:45.749 [2024-11-20 12:42:51.269172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.269191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.277287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f2d80 00:28:45.749 [2024-11-20 12:42:51.278200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 
[2024-11-20 12:42:51.278226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.286568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618 00:28:45.749 [2024-11-20 12:42:51.287534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.287556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.295988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6300 00:28:45.749 [2024-11-20 12:42:51.297084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.297104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.305384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166df550 00:28:45.749 [2024-11-20 12:42:51.306581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.306601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.314811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0bc0 00:28:45.749 [2024-11-20 12:42:51.316145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6939 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.316164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.321393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7da8 00:28:45.749 [2024-11-20 12:42:51.322003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.322022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.330788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6300 00:28:45.749 [2024-11-20 12:42:51.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.331566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.341698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ebb98 00:28:45.749 [2024-11-20 12:42:51.342715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.342734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.349246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166efae0 00:28:45.749 [2024-11-20 12:42:51.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:121 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.349690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.360602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0350 00:28:45.749 [2024-11-20 12:42:51.362075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.362094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.367183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eee38 00:28:45.749 [2024-11-20 12:42:51.367952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.367971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.376583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6738 00:28:45.749 [2024-11-20 12:42:51.377467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.377488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.386013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f92c0 00:28:45.749 [2024-11-20 12:42:51.387026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.387046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.395136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6b70 00:28:45.749 [2024-11-20 12:42:51.396143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.396162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.404353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ed4e8 00:28:45.749 [2024-11-20 12:42:51.405029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.405048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.412851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f1ca0 00:28:45.749 [2024-11-20 12:42:51.414078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.414097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.420570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f96f8 00:28:45.749 
[2024-11-20 12:42:51.421223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.421243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.429969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166de470 00:28:45.749 [2024-11-20 12:42:51.430768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.430787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.441019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fd640 00:28:45.749 [2024-11-20 12:42:51.442065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.442087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.448575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ed0b0 00:28:45.749 [2024-11-20 12:42:51.449028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.449047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.457993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889640) with pdu=0x2000166e9e10 00:28:45.749 [2024-11-20 12:42:51.458575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.458595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.467410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8a50 00:28:45.749 [2024-11-20 12:42:51.468099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.749 [2024-11-20 12:42:51.468119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:45.749 [2024-11-20 12:42:51.475880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166df988 00:28:45.750 [2024-11-20 12:42:51.477123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.750 [2024-11-20 12:42:51.477142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:45.750 [2024-11-20 12:42:51.483618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4de8 00:28:45.750 [2024-11-20 12:42:51.484295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.750 [2024-11-20 12:42:51.484314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:45.750 [2024-11-20 12:42:51.494766] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7da8 00:28:45.750 [2024-11-20 12:42:51.495923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.750 [2024-11-20 12:42:51.495942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:45.750 [2024-11-20 12:42:51.504002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f3e60 00:28:45.750 [2024-11-20 12:42:51.504754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.750 [2024-11-20 12:42:51.504774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.512742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fb8b8 00:28:46.010 [2024-11-20 12:42:51.514060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.514079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.520674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ed920 00:28:46.010 [2024-11-20 12:42:51.521364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.521384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:28:46.010 [2024-11-20 12:42:51.529817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fe720 00:28:46.010 [2024-11-20 12:42:51.530509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.530528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.539267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4140 00:28:46.010 [2024-11-20 12:42:51.539928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.539948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.548648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f92c0 00:28:46.010 [2024-11-20 12:42:51.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.549590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.558068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f5378 00:28:46.010 [2024-11-20 12:42:51.559101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.559120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.566437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ecc78 00:28:46.010 [2024-11-20 12:42:51.567115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.567134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.576501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ed0b0 00:28:46.010 [2024-11-20 12:42:51.577635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.577654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.585930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fa3a0 00:28:46.010 [2024-11-20 12:42:51.587188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.587213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:46.010 [2024-11-20 12:42:51.595313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e99d8 00:28:46.010 [2024-11-20 12:42:51.596671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.010 [2024-11-20 12:42:51.596690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.604727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eb328
00:28:46.010 [2024-11-20 12:42:51.606204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.606223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.611052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e7c50
00:28:46.010 [2024-11-20 12:42:51.611743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.611763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.619953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ebb98
00:28:46.010 [2024-11-20 12:42:51.620744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.620763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.629392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4140
00:28:46.010 [2024-11-20 12:42:51.630350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.630368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.640415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ef270
00:28:46.010 [2024-11-20 12:42:51.641787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.641808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.649818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166de470
00:28:46.010 [2024-11-20 12:42:51.651317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.651336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.656151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ecc78
00:28:46.010 [2024-11-20 12:42:51.656771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.656790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.665545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee190
00:28:46.010 [2024-11-20 12:42:51.666402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.666422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.674507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f8618
00:28:46.010 [2024-11-20 12:42:51.675419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.675438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.684534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f2d80
00:28:46.010 [2024-11-20 12:42:51.685581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.685600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.693803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f7970
00:28:46.010 [2024-11-20 12:42:51.694888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.694907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.701371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e1710
00:28:46.010 [2024-11-20 12:42:51.701963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.701983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.710546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e01f8
00:28:46.010 [2024-11-20 12:42:51.711377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.711396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.719699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6738
00:28:46.010 [2024-11-20 12:42:51.720522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.010 [2024-11-20 12:42:51.720542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:46.010 [2024-11-20 12:42:51.728711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e6fa8
00:28:46.010 [2024-11-20 12:42:51.729527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.011 [2024-11-20 12:42:51.729547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:46.011 [2024-11-20 12:42:51.737930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166df988
00:28:46.011 [2024-11-20 12:42:51.738557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.011 [2024-11-20 12:42:51.738577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:28:46.011 [2024-11-20 12:42:51.747351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6cc8
00:28:46.011 [2024-11-20 12:42:51.748077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.011 [2024-11-20 12:42:51.748097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:46.011 [2024-11-20 12:42:51.755849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6458
00:28:46.011 [2024-11-20 12:42:51.757154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.011 [2024-11-20 12:42:51.757177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:46.011 [2024-11-20 12:42:51.763794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f4298
00:28:46.011 [2024-11-20 12:42:51.764492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.011 [2024-11-20 12:42:51.764511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.774033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee5c8
00:28:46.271 [2024-11-20 12:42:51.774947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.774966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.783590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166efae0
00:28:46.271 [2024-11-20 12:42:51.784505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.784524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.792688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fe720
00:28:46.271 [2024-11-20 12:42:51.793651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.793670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.801704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eb760
00:28:46.271 [2024-11-20 12:42:51.802646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.802666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.810700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f9f68
00:28:46.271 [2024-11-20 12:42:51.811653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.811672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.819709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e7c50
00:28:46.271 [2024-11-20 12:42:51.820656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.820676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.828872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f6cc8
00:28:46.271 [2024-11-20 12:42:51.829834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.829854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.837884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e5220
00:28:46.271 [2024-11-20 12:42:51.838859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.838878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.848053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eea00
00:28:46.271 [2024-11-20 12:42:51.849454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.849474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.855272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e7818
00:28:46.271 [2024-11-20 12:42:51.856192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.856217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.864685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e4de8
00:28:46.271 [2024-11-20 12:42:51.865717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.865736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.874112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ef270
00:28:46.271 [2024-11-20 12:42:51.875239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.875258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.882472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e01f8
00:28:46.271 [2024-11-20 12:42:51.883275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.883294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.891600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fe2e8
00:28:46.271 [2024-11-20 12:42:51.892309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.892329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.901968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f81e0
00:28:46.271 [2024-11-20 12:42:51.903337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.903357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.909131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fda78
00:28:46.271 [2024-11-20 12:42:51.910043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.910062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.918907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f4b08
00:28:46.271 [2024-11-20 12:42:51.919619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.919640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.927414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166ee190
00:28:46.271 [2024-11-20 12:42:51.928636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.271 [2024-11-20 12:42:51.928656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:46.271 [2024-11-20 12:42:51.937308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f0788
00:28:46.272 [2024-11-20 12:42:51.938360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.938379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:28:46.272 [2024-11-20 12:42:51.944736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166fac10
00:28:46.272 [2024-11-20 12:42:51.945327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.945347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:46.272 [2024-11-20 12:42:51.955791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166e3060
00:28:46.272 [2024-11-20 12:42:51.957067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.957086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:28:46.272 [2024-11-20 12:42:51.964812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eb328
00:28:46.272 [2024-11-20 12:42:51.966115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.966134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:46.272 [2024-11-20 12:42:51.973908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166f4f40
00:28:46.272 [2024-11-20 12:42:51.975168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.975186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:46.272 27842.00 IOPS, 108.76 MiB/s [2024-11-20T11:42:52.038Z]
[2024-11-20 12:42:51.982007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889640) with pdu=0x2000166eee38
00:28:46.272 [2024-11-20 12:42:51.982915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.272 [2024-11-20 12:42:51.982935]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:46.272
00:28:46.272 Latency(us)
00:28:46.272 [2024-11-20T11:42:52.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.272 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:46.272 nvme0n1 : 2.01 27840.39 108.75 0.00 0.00 4591.84 1763.23 12295.80
00:28:46.272 [2024-11-20T11:42:52.038Z] ===================================================================================================================
00:28:46.272 [2024-11-20T11:42:52.038Z] Total : 27840.39 108.75 0.00 0.00 4591.84 1763.23 12295.80
00:28:46.272 {
00:28:46.272 "results": [
00:28:46.272 {
00:28:46.272 "job": "nvme0n1",
00:28:46.272 "core_mask": "0x2",
00:28:46.272 "workload": "randwrite",
00:28:46.272 "status": "finished",
00:28:46.272 "queue_depth": 128,
00:28:46.272 "io_size": 4096,
00:28:46.272 "runtime": 2.007012,
00:28:46.272 "iops": 27840.391587095644,
00:28:46.272 "mibps": 108.75152963709236,
00:28:46.272 "io_failed": 0,
00:28:46.272 "io_timeout": 0,
00:28:46.272 "avg_latency_us": 4591.835231345598,
00:28:46.272 "min_latency_us": 1763.230476190476,
00:28:46.272 "max_latency_us": 12295.801904761905
00:28:46.272 }
00:28:46.272 ],
00:28:46.272 "core_count": 1
00:28:46.272 }
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:46.272 | .driver_specific
00:28:46.272 | .nvme_error
00:28:46.272 | .status_code
00:28:46.272 | .command_transient_transport_error'
00:28:46.272 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:46.531 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 331850
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 331850 ']'
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 331850
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331850
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331850'
killing process with pid 331850
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 331850
Received shutdown signal, test time was about 2.000000 seconds
00:28:46.531
00:28:46.531 Latency(us)
00:28:46.531 [2024-11-20T11:42:52.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.531 [2024-11-20T11:42:52.297Z] ===================================================================================================================
00:28:46.531 [2024-11-20T11:42:52.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:46.531 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 331850
00:28:46.789 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=332485
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 332485 /var/tmp/bperf.sock
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 332485 ']'
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:46.790 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.790 [2024-11-20 12:42:52.453982] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:28:46.790 [2024-11-20 12:42:52.454030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332485 ]
00:28:46.790 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:46.790 Zero copy mechanism will not be used.
00:28:46.790 [2024-11-20 12:42:52.527509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.871 [2024-11-20 12:42:52.570151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:47.048 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:47.306 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.306 12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.564 nvme0n1
00:28:47.564 12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.564 12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:47.824 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.824 Zero copy mechanism will not be used.
00:28:47.824 Running I/O for 2 seconds...
00:28:47.824 [2024-11-20 12:42:53.352030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.824 [2024-11-20 12:42:53.352101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.824 [2024-11-20 12:42:53.352129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:47.824 [2024-11-20 12:42:53.356666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.824 [2024-11-20 12:42:53.356727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.824 [2024-11-20 12:42:53.356749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:47.824 [2024-11-20 12:42:53.361026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.824 [2024-11-20 12:42:53.361090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.824 [2024-11-20 12:42:53.361113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:47.824 [2024-11-20 12:42:53.365425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.824 [2024-11-20 12:42:53.365487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.365507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.369755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.369808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.369827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.374025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.374092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.374113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.378343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.378398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.378417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.382617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.382675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.382695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.386891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.386952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.386971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.391071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.391127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.391146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.395295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.395377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.399555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.399627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.399647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.403798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.403870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.403889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.408189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.408266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.408284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.412791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.412842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.825 [2024-11-20 12:42:53.412859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:47.825 [2024-11-20 12:42:53.417931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:47.825 [2024-11-20 12:42:53.417989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 12:42:53.418007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.422854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.422994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.423013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.428247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.428302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.428321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.433515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.433622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.433641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.438819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.438879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.438897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.444090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.444141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.444159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.449182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.449261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.453809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.453861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.453880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.458695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.458749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.463727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.463860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.463881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.469580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.469666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.469690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.476940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.477068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.477087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.484236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.484373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.484392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.491816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.491955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.491974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.498850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.499003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.825 [2024-11-20 12:42:53.499022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.825 [2024-11-20 12:42:53.505911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.825 [2024-11-20 12:42:53.506096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.506115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.513180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 
[2024-11-20 12:42:53.513337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.513356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.519925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.520072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.520091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.526891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.527066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.527084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.534265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.534420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.534439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.541886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.542021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.542044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.549274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.549399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.549420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.557188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.557342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.557362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.564940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.565128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.565149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.571926] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.571981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.572001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.577566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.577648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.577669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:47.826 [2024-11-20 12:42:53.582885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:47.826 [2024-11-20 12:42:53.583023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.826 [2024-11-20 12:42:53.583042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.588113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.588179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.588197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:48.087 [2024-11-20 12:42:53.592939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.593075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.593094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.598847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.599024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.599043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.604889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.605037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.605055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.610799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.610987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.611007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.617233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.617306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.617326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.622591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.087 [2024-11-20 12:42:53.622919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.087 [2024-11-20 12:42:53.622940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.087 [2024-11-20 12:42:53.628631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.628948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.628969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.634265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.634530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.634550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.640046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.640379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.640406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.646150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.646502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.646523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.652025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.652333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.652353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.657758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.658059] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.663702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.664014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.664034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.669608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.669929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.669950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.675449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.675774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.675794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.681058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.681354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.088 [2024-11-20 12:42:53.681374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.686706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.686953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.686974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.692340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.692659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.698510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.698736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.698757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.704389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.704682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.704703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.710795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.711134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.711155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.716960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.717239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.717259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.722879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.723219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.723240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.728825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.729061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.729081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.734454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.734770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.734791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.740054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.740359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.740379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.745572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.745779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.745799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.750620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.750902] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.750922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.756328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.756630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.756650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.761900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.762198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.762226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.767616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.767862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.767883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.773419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 
00:28:48.088 [2024-11-20 12:42:53.773692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.773712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.779249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.779536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.088 [2024-11-20 12:42:53.779556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.088 [2024-11-20 12:42:53.785118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.088 [2024-11-20 12:42:53.785396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.785417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.790804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.791010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.791033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.795103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.795325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.795345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.799620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.799842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.799862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.804159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.804396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.804416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.808705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.808927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.808947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.813390] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.813596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.817863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.818078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.818098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.822722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.822872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.822893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.827300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.827512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.827532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:28:48.089 [2024-11-20 12:42:53.831830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.832048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.832068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.836504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.836728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.836749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.840891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.841102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.841123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.089 [2024-11-20 12:42:53.845416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.089 [2024-11-20 12:42:53.845640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.089 [2024-11-20 12:42:53.845659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.849836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.850071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.850091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.854700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.854910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.854930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.859160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.859381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.859400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.863712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.863963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.863983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.868450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.868657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.868677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.872792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.873003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.877509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.350 [2024-11-20 12:42:53.877713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.350 [2024-11-20 12:42:53.877733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.350 [2024-11-20 12:42:53.882880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.883118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.883138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.887403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.887602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.887623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.892135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.892373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.892393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.896900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.897105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.897125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.901490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.901707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:48.351 [2024-11-20 12:42:53.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.906163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.906376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.906394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.910575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.910784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.910805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.915149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.915381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.919860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.920048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.920067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.924511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.924718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.924737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.929290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.929502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.929522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.934076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.934290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.934309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.938465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.938668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.938688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.942929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.943256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.943276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.947908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.948103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.948121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.952248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.952428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.952451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.956763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.956974] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.956992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.960813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.961021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.961041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.964823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.965010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.969587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.969782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.969800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.974034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 
00:28:48.351 [2024-11-20 12:42:53.974370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.974390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.978369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.978544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.978563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.982844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.983036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.983054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.987450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.987622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.987641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.991902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.992106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.992126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.995887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.996086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.996106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:53.999721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.351 [2024-11-20 12:42:53.999913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.351 [2024-11-20 12:42:53.999932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.351 [2024-11-20 12:42:54.003638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.003825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.003845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.007675] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.007893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.011643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.011832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.011852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.015621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.015811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.015831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.019721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.019905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.019925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:28:48.352 [2024-11-20 12:42:54.023723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.023927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.023947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.027624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.027829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.027849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.031600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.031800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.031820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.035756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.035966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.040675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.040843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.040861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.045258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.045448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.045466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.049243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.049432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.049452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.053359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.053543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.053562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.057268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.057462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.057482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.061199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.061381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.061404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.065220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.065414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.065435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.070041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.070211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.070229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.074501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.074697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.074716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.078542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.078732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.078751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.082789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.082979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.082997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.086772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.086963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.352 [2024-11-20 12:42:54.086982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.090815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.091017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.091037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.094880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.095056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.095074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.099018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.099216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.099235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.103239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.103438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.103458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.107198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.107363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.107382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.352 [2024-11-20 12:42:54.111176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.352 [2024-11-20 12:42:54.111343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.352 [2024-11-20 12:42:54.111362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.115264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.115439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.115460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.119366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.119529] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.119549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.123352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.123526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.123547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.127445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.127613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.127633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.131729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.131897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.131916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.136437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.136632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.136651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.140392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.140564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.140584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.144399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.144568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.144588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.148340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.148513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.148533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.152300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 
00:28:48.614 [2024-11-20 12:42:54.152451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.152471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.156083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.614 [2024-11-20 12:42:54.156270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.614 [2024-11-20 12:42:54.156288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.614 [2024-11-20 12:42:54.159908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.160091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.160109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.164447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.164518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.164537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.169264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.169416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.169440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.173459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.173614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.173633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.177473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.177648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.177669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.181533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.181703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.181721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.185436] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.185596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.185616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.189290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.189437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.189457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.193288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.193469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.193487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.197607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.197760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.197780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
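The repeating pattern above is SPDK's NVMe/TCP data digest check failing: `tcp.c:2233:data_crc32_calc_done` reports a CRC32C mismatch on a received data PDU, after which the failed WRITE and its TRANSIENT TRANSPORT ERROR (00/22) completion are printed. As a minimal sketch of the digest being checked (an assumption for illustration — SPDK itself uses accelerated CRC32C routines, not this bitwise loop), the reflected CRC-32C (Castagnoli polynomial, reversed form 0x82F63B78) can be computed as:

```python
# Bitwise reflected CRC-32C (Castagnoli) with init/xorout 0xFFFFFFFF.
# This is only an illustrative sketch of the digest NVMe/TCP carries on
# data PDUs; SPDK computes it with optimized routines.

def crc32c(data: bytes, crc: int = 0) -> int:
    """Return the CRC-32C of `data`, optionally continuing from `crc`."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value:
assert crc32c(b"123456789") == 0xE3069283

# A "Data digest error" like those logged means the payload's computed
# CRC32C did not match the digest field in the PDU (hypothetical data):
payload = b"\x00" * 32
corrupted = b"\x01" + payload[1:]
assert crc32c(corrupted) != crc32c(payload)
```

In this test the digest errors are injected deliberately, so each mismatch surfaces as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen throughout the log.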
00:28:48.615 [2024-11-20 12:42:54.202109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.202269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.202288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.206720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.206889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.206907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.210693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.210856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.210874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.214664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.214817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.214838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.218699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.218861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.218881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.222598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.222754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.222773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.226503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.226667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.226687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.230442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.230608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.230629] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.235084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.235252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.235270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.240191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.240329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.240347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.244213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.244385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.244403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.248159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.248323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.248341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.252195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.252354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.252372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.256180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.256355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.256373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.260144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.260312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.264225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.264569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:48.615 [2024-11-20 12:42:54.264589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.269319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.269453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.615 [2024-11-20 12:42:54.269472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.615 [2024-11-20 12:42:54.273753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.615 [2024-11-20 12:42:54.273913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.273932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.278058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.278222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.278243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.282322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.282474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.282493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.286949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.287093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.287112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.291258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.291413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.291431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.295341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.295505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.295525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.299955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.300092] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.300111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.304554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.304718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.304737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.308565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.308739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.308758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.312426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.312598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.312618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.316375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.316529] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.316549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.320456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.320693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.320713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.324458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.324639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.328278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.328436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.328456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.332226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with 
pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.332395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.332416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.336170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.336334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.336353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.340050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.340229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.340246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.344070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.344247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.344266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 6401.00 IOPS, 800.12 MiB/s [2024-11-20T11:42:54.382Z] [2024-11-20 
12:42:54.349188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.349343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.349362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.352946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.353127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.353146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.356758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.356933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.356952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.360547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.360699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.364380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.364556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.364573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.368182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.368367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.368385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.616 [2024-11-20 12:42:54.372089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.616 [2024-11-20 12:42:54.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.616 [2024-11-20 12:42:54.372290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.877 [2024-11-20 12:42:54.376594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.877 [2024-11-20 12:42:54.376730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.877 [2024-11-20 12:42:54.376749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.877 [2024-11-20 12:42:54.381112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.877 [2024-11-20 12:42:54.381254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.877 [2024-11-20 12:42:54.381273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.877 [2024-11-20 12:42:54.385803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.877 [2024-11-20 12:42:54.385970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.877 [2024-11-20 12:42:54.385992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.877 [2024-11-20 12:42:54.389792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.877 [2024-11-20 12:42:54.389966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.877 [2024-11-20 12:42:54.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.877 [2024-11-20 12:42:54.393863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.394033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.394052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.397856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.398015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.398033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.401797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.401971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.401990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.405720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.405895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.405914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.409825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.409994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 
[2024-11-20 12:42:54.410012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.414572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.414744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.414762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.419078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.419241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.419260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.423265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.423433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.423452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.427217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.427387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.427405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.431117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.431297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.431316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.435054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.435215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.435234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.439024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.439194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.439218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.442902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.443077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.443096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.446897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.447066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.447084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.450754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.450916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.450934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.454677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.454839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.454857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.459421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.459589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.459607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.463874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.464036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.464054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.468074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.468247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.468266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.472709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.472880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.472900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.477124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 
[2024-11-20 12:42:54.477285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.481374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.481533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.481553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.485930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.490593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.490763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.490783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.494772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.494930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.494952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.498799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.498963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.878 [2024-11-20 12:42:54.498982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.878 [2024-11-20 12:42:54.502664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.878 [2024-11-20 12:42:54.502837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.502856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.506724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.506903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.506923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.510708] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.510864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.510885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.514647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.514797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.514815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.518648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.518824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.518842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.522578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.522744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.522764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:28:48.879 [2024-11-20 12:42:54.526581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.526740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.526760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.530645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.530796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.530816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.535315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.535481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.535499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.539770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.539949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.539968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.544596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.544732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.544751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.549984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.550187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.550229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.555102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.555301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.555320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.561259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.561451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.561470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.567388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.567668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.573999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.574216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.574236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.580504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.580736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.580757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.586481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.586758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.586780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.592213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.592408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.592427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.597660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.597839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.597858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.603141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.603310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.603329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.608619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.608904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:48.879 [2024-11-20 12:42:54.608924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.614103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.614284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.614303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.619580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.619882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.619903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.625242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.625444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.625468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.630787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.631058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.631079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:48.879 [2024-11-20 12:42:54.635953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:48.879 [2024-11-20 12:42:54.636142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.879 [2024-11-20 12:42:54.636162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.641172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.641507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.641528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.646530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.646806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.646828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.651618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.651866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.651887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.657671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.657869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.657888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.663838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.664108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.664129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.670809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.671001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.671021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.676819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 
00:28:49.141 [2024-11-20 12:42:54.677033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.677053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.682970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.683185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.683214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.689064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.689310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.689331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.695578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.695856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.695877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.702082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.702394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.702415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.707446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.707725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.707746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.712778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.712993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.713013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.716971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.717157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.717176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.721631] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.721812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.721831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.726099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.726308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.731022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.731339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.731361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.736737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.737055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.737075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:49.141 [2024-11-20 12:42:54.742741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.742949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.742970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.748674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.748843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-11-20 12:42:54.748861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.141 [2024-11-20 12:42:54.754804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.141 [2024-11-20 12:42:54.754933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.754952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.761289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.761425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.761444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.766871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.766924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.766943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.771841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.771948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.771971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.777171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.777273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.777292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.782334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.782424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.787494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.787592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.787613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.792792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.792876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.792898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.796962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.797031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.797050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.800949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.801017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.801036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.804857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.804928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.804947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.808765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.808834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.808854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.813101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.813171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.813194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.818137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.818210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:49.142 [2024-11-20 12:42:54.818230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.822358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.822429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.822448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.826229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.826297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.826315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.830139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.830216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.830235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.834038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.834108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.834127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.837968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.838039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.838059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.841853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.841925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.841944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.845612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.845680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.845699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.849521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.849594] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.849613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.853722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.853789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.853807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.858275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.858348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.858366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.862212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.862300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.862318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.866094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.866164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.866182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.142 [2024-11-20 12:42:54.870053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.142 [2024-11-20 12:42:54.870131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-11-20 12:42:54.870150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.873912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.873984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.874002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.877707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.877785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.877804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.881717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 
00:28:49.143 [2024-11-20 12:42:54.881785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.881804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.886283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.886352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.886370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.890816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.890915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.894731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.894804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.894822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.143 [2024-11-20 12:42:54.898819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.143 [2024-11-20 12:42:54.898905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-11-20 12:42:54.898923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.902787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.902858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.902876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.906588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.906658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.906676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.910460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.910529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.910548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.914250] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.914321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.914339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.918092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.918162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.918183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.922250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.922318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.922337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.926561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.926628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.926646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:28:49.403 [2024-11-20 12:42:54.931145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.931221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.931240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.935841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.935917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.935936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.940934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.941002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.941021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.403 [2024-11-20 12:42:54.945670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.403 [2024-11-20 12:42:54.945738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.403 [2024-11-20 12:42:54.945758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.949752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.949821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.949840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.953751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.953819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.953838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.957765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.957843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.957862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.961917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.961985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.962004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.965998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.966078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.966097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.970151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.970229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.970248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.974099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.974172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.974191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.978130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.978208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.978228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.982335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.982404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.982422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.987043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.987112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.987131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.991296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.991371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.403 [2024-11-20 12:42:54.991391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.403 [2024-11-20 12:42:54.995228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.403 [2024-11-20 12:42:54.995307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:54.995325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:54.999736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:54.999863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:54.999881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.005255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.005440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.005459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.010664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.010852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.010871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.016810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.017002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.017021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.023215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.023383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.023403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.029720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.029913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.029934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.035965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.036169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.036189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.041824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.042020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.042044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.047623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.047779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.047798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.053901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.054083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.054102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.059785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.059921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.059940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.066102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.066277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.066296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.073210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.073394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.073412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.080215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.080371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.080390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.087055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.087256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.087274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.093264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.093394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.093413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.098593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.098698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.098717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.103067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.103133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.103151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.107679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.107765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.107784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.111617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.111689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.111708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.115521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.115588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.115606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.119510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.119578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.119596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.123561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.123627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.123646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.127588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.127657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.127677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.131597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.131667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.131686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.135518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.135592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.135611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.404 [2024-11-20 12:42:55.139535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.404 [2024-11-20 12:42:55.139606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.404 [2024-11-20 12:42:55.139624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.405 [2024-11-20 12:42:55.144072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.405 [2024-11-20 12:42:55.144139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.405 [2024-11-20 12:42:55.144158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.405 [2024-11-20 12:42:55.148599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.405 [2024-11-20 12:42:55.148666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.405 [2024-11-20 12:42:55.148684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.405 [2024-11-20 12:42:55.152766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.405 [2024-11-20 12:42:55.152832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.405 [2024-11-20 12:42:55.152850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.405 [2024-11-20 12:42:55.156780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.405 [2024-11-20 12:42:55.156847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.405 [2024-11-20 12:42:55.156866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.405 [2024-11-20 12:42:55.160724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.405 [2024-11-20 12:42:55.160792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.405 [2024-11-20 12:42:55.160812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.164587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.164669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.164688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.168492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.168557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.168580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.172312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.172384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.172403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.176109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.176187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.176212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.179971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.180039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.183623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.183693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.183711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.187263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.187334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.187353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.190906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.190982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.191000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.194574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.194657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.194676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.198377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.198457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.198476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.201991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.202065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.202083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.205636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.205706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.205724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.209234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.209307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.209326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.212855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.212926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.212944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.216503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.216573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.216592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.220116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.220189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.220215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.223766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.223837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.223854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.227380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.227448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.227466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.230986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.231055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.231073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.234575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.234648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.234667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.238173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.238250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.238269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.241882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.241951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.241971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.245563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.245635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.245654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.249192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.249278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.249296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.252791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.252860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.665 [2024-11-20 12:42:55.252878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.665 [2024-11-20 12:42:55.256449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.665 [2024-11-20 12:42:55.256515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.256534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.259999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.260067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.260087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.263606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.263678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.263699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.267209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.267280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.267300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.270817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.270888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.270906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.274402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.274473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.274491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.278006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.278076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.278095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.281613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.281685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.281703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.285251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.285326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.285345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.289085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.289153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.289171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.292651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.292726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.292745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.296415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.296549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.296567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.301262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8
00:28:49.666 [2024-11-20 12:42:55.301388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.666 [2024-11-20 12:42:55.301406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:49.666 [2024-11-20 12:42:55.306175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.306298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.306317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.310037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.310165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.310183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.313848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.313969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.313987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.317750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.317835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.317854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.321526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.321629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.321646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.325325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.325438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.325456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.329379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.329490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.329508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.333360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.333452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.337458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.337594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.337612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.341666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.341762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.341781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:49.666 [2024-11-20 12:42:55.345801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.345919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.345938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:49.666 6643.50 IOPS, 830.44 MiB/s [2024-11-20T11:42:55.432Z] [2024-11-20 12:42:55.350787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x889b20) with pdu=0x2000166ff3c8 00:28:49.666 [2024-11-20 12:42:55.350839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:49.666 [2024-11-20 12:42:55.350858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:49.666 00:28:49.666 Latency(us) 00:28:49.666 [2024-11-20T11:42:55.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.666 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:49.666 nvme0n1 : 2.00 6642.04 830.26 0.00 0.00 2404.91 1716.42 10860.25 00:28:49.666 [2024-11-20T11:42:55.432Z] =================================================================================================================== 00:28:49.666 [2024-11-20T11:42:55.432Z] Total : 6642.04 830.26 0.00 0.00 2404.91 1716.42 10860.25 00:28:49.666 { 00:28:49.666 "results": [ 00:28:49.666 { 00:28:49.666 "job": "nvme0n1", 00:28:49.666 "core_mask": "0x2", 00:28:49.666 "workload": "randwrite", 00:28:49.666 "status": "finished", 00:28:49.666 "queue_depth": 16, 00:28:49.666 "io_size": 131072, 00:28:49.666 "runtime": 2.002848, 00:28:49.667 "iops": 6642.041732572817, 00:28:49.667 "mibps": 830.2552165716021, 00:28:49.667 "io_failed": 0, 00:28:49.667 "io_timeout": 0, 00:28:49.667 "avg_latency_us": 2404.912950677076, 00:28:49.667 "min_latency_us": 1716.4190476190477, 00:28:49.667 "max_latency_us": 10860.251428571428 00:28:49.667 } 00:28:49.667 ], 00:28:49.667 "core_count": 1 00:28:49.667 } 00:28:49.667 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:49.667 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:49.667 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:49.667 | .driver_specific 00:28:49.667 | .nvme_error 00:28:49.667 | .status_code 00:28:49.667 | .command_transient_transport_error' 00:28:49.667 12:42:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 430 > 0 )) 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 332485 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 332485 ']' 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 332485 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332485 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332485' 00:28:49.926 killing process with pid 332485 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 332485 00:28:49.926 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.926 00:28:49.926 Latency(us) 00:28:49.926 [2024-11-20T11:42:55.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.926 [2024-11-20T11:42:55.692Z] 
=================================================================================================================== 00:28:49.926 [2024-11-20T11:42:55.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.926 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 332485 00:28:50.185 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 330765 00:28:50.185 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 330765 ']' 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 330765 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330765 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330765' 00:28:50.186 killing process with pid 330765 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 330765 00:28:50.186 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 330765 00:28:50.445 00:28:50.445 real 0m13.795s 00:28:50.445 user 0m26.111s 00:28:50.445 sys 0m4.724s 00:28:50.445 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:28:50.445 12:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.445 ************************************ 00:28:50.445 END TEST nvmf_digest_error 00:28:50.445 ************************************ 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.445 rmmod nvme_tcp 00:28:50.445 rmmod nvme_fabrics 00:28:50.445 rmmod nvme_keyring 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 330765 ']' 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 330765 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 330765 ']' 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 330765 00:28:50.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (330765) - No such process 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@981 -- # echo 'Process with pid 330765 is not found' 00:28:50.445 Process with pid 330765 is not found 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.445 12:42:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.982 00:28:52.982 real 0m36.105s 00:28:52.982 user 0m54.583s 00:28:52.982 sys 0m13.900s 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:52.982 ************************************ 00:28:52.982 END TEST nvmf_digest 00:28:52.982 ************************************ 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 
-eq 1 ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.982 ************************************ 00:28:52.982 START TEST nvmf_bdevperf 00:28:52.982 ************************************ 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:52.982 * Looking for test storage... 
00:28:52.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.982 --rc genhtml_branch_coverage=1 00:28:52.982 --rc genhtml_function_coverage=1 00:28:52.982 --rc genhtml_legend=1 00:28:52.982 --rc geninfo_all_blocks=1 00:28:52.982 --rc geninfo_unexecuted_blocks=1 00:28:52.982 00:28:52.982 ' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.982 --rc genhtml_branch_coverage=1 00:28:52.982 --rc genhtml_function_coverage=1 00:28:52.982 --rc genhtml_legend=1 00:28:52.982 --rc geninfo_all_blocks=1 00:28:52.982 --rc geninfo_unexecuted_blocks=1 00:28:52.982 00:28:52.982 ' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.982 --rc genhtml_branch_coverage=1 00:28:52.982 --rc genhtml_function_coverage=1 00:28:52.982 --rc genhtml_legend=1 00:28:52.982 --rc geninfo_all_blocks=1 00:28:52.982 --rc geninfo_unexecuted_blocks=1 00:28:52.982 00:28:52.982 ' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.982 --rc genhtml_branch_coverage=1 00:28:52.982 --rc genhtml_function_coverage=1 00:28:52.982 --rc genhtml_legend=1 00:28:52.982 --rc geninfo_all_blocks=1 00:28:52.982 --rc geninfo_unexecuted_blocks=1 00:28:52.982 00:28:52.982 ' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.982 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:52.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.983 12:42:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.559 12:43:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:59.559 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.560 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.560 
12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.560 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.560 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.560 12:43:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.560 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:28:59.560 00:28:59.560 --- 10.0.0.2 ping statistics --- 00:28:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.560 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:28:59.560 00:28:59.560 --- 10.0.0.1 ping statistics --- 00:28:59.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.560 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=336503 00:28:59.560 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 336503 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 336503 ']' 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 [2024-11-20 12:43:04.450397] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:28:59.561 [2024-11-20 12:43:04.450443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.561 [2024-11-20 12:43:04.514646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.561 [2024-11-20 12:43:04.556980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.561 [2024-11-20 12:43:04.557015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.561 [2024-11-20 12:43:04.557022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.561 [2024-11-20 12:43:04.557028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.561 [2024-11-20 12:43:04.557033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.561 [2024-11-20 12:43:04.558431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.561 [2024-11-20 12:43:04.558448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.561 [2024-11-20 12:43:04.558453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 [2024-11-20 12:43:04.702405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 Malloc0 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 [2024-11-20 12:43:04.774530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:59.561 
12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.561 { 00:28:59.561 "params": { 00:28:59.561 "name": "Nvme$subsystem", 00:28:59.561 "trtype": "$TEST_TRANSPORT", 00:28:59.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.561 "adrfam": "ipv4", 00:28:59.561 "trsvcid": "$NVMF_PORT", 00:28:59.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.561 "hdgst": ${hdgst:-false}, 00:28:59.561 "ddgst": ${ddgst:-false} 00:28:59.561 }, 00:28:59.561 "method": "bdev_nvme_attach_controller" 00:28:59.561 } 00:28:59.561 EOF 00:28:59.561 )") 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:59.561 12:43:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.561 "params": { 00:28:59.561 "name": "Nvme1", 00:28:59.561 "trtype": "tcp", 00:28:59.561 "traddr": "10.0.0.2", 00:28:59.561 "adrfam": "ipv4", 00:28:59.561 "trsvcid": "4420", 00:28:59.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.561 "hdgst": false, 00:28:59.561 "ddgst": false 00:28:59.561 }, 00:28:59.561 "method": "bdev_nvme_attach_controller" 00:28:59.561 }' 00:28:59.561 [2024-11-20 12:43:04.825873] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:28:59.561 [2024-11-20 12:43:04.825915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336527 ] 00:28:59.561 [2024-11-20 12:43:04.903489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.561 [2024-11-20 12:43:04.944286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.561 Running I/O for 1 seconds... 00:29:00.940 11472.00 IOPS, 44.81 MiB/s 00:29:00.940 Latency(us) 00:29:00.940 [2024-11-20T11:43:06.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:00.940 Verification LBA range: start 0x0 length 0x4000 00:29:00.940 Nvme1n1 : 1.01 11484.81 44.86 0.00 0.00 11102.64 1232.70 11858.90 00:29:00.940 [2024-11-20T11:43:06.706Z] =================================================================================================================== 00:29:00.940 [2024-11-20T11:43:06.706Z] Total : 11484.81 44.86 0.00 0.00 11102.64 1232.70 11858.90 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=336770 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.940 { 00:29:00.940 "params": { 00:29:00.940 "name": "Nvme$subsystem", 00:29:00.940 "trtype": "$TEST_TRANSPORT", 00:29:00.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.940 "adrfam": "ipv4", 00:29:00.940 "trsvcid": "$NVMF_PORT", 00:29:00.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.940 "hdgst": ${hdgst:-false}, 00:29:00.940 "ddgst": ${ddgst:-false} 00:29:00.940 }, 00:29:00.940 "method": "bdev_nvme_attach_controller" 00:29:00.940 } 00:29:00.940 EOF 00:29:00.940 )") 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:00.940 12:43:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.940 "params": { 00:29:00.940 "name": "Nvme1", 00:29:00.940 "trtype": "tcp", 00:29:00.940 "traddr": "10.0.0.2", 00:29:00.940 "adrfam": "ipv4", 00:29:00.940 "trsvcid": "4420", 00:29:00.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.940 "hdgst": false, 00:29:00.940 "ddgst": false 00:29:00.940 }, 00:29:00.940 "method": "bdev_nvme_attach_controller" 00:29:00.940 }' 00:29:00.940 [2024-11-20 12:43:06.487381] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:29:00.940 [2024-11-20 12:43:06.487431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336770 ] 00:29:00.940 [2024-11-20 12:43:06.564621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.940 [2024-11-20 12:43:06.602521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.198 Running I/O for 15 seconds... 00:29:03.070 11369.00 IOPS, 44.41 MiB/s [2024-11-20T11:43:09.777Z] 11324.00 IOPS, 44.23 MiB/s [2024-11-20T11:43:09.777Z] 12:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 336503 00:29:04.011 12:43:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:04.011 [2024-11-20 12:43:09.451523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.011 [2024-11-20 12:43:09.451561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.011 [2024-11-20 12:43:09.451580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.011 [2024-11-20 12:43:09.451590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.011 [2024-11-20 12:43:09.451600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.011 [2024-11-20 12:43:09.451608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.011 [2024-11-20 12:43:09.451618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.011 [2024-11-20 12:43:09.451624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.011 [... identical nvme_io_qpair_print_command / spdk_nvme_print_completion record pairs repeated for the remaining outstanding commands on sqid:1 — READ lba:109648 through lba:110520 (len:8, SGL TRANSPORT DATA BLOCK) and WRITE lba:110584 through lba:110640 (len:8, SGL DATA BLOCK OFFSET), every completion reporting ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:29:04.014 [2024-11-20 12:43:09.453546] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.014 [2024-11-20 12:43:09.453552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.453560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.014 [2024-11-20 12:43:09.453566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.453573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.014 [2024-11-20 12:43:09.453579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.453588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.014 [2024-11-20 12:43:09.453594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.453602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.014 [2024-11-20 12:43:09.453608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.453615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701cf0 is same with the state(6) to be set 00:29:04.014 [2024-11-20 12:43:09.453623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.014 [2024-11-20 
12:43:09.453628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.014 [2024-11-20 12:43:09.453637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110568 len:8 PRP1 0x0 PRP2 0x0 00:29:04.014 [2024-11-20 12:43:09.453645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.014 [2024-11-20 12:43:09.456481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.014 [2024-11-20 12:43:09.456537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.014 [2024-11-20 12:43:09.457133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.014 [2024-11-20 12:43:09.457154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.014 [2024-11-20 12:43:09.457162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.014 [2024-11-20 12:43:09.457342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.014 [2024-11-20 12:43:09.457516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.014 [2024-11-20 12:43:09.457525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.014 [2024-11-20 12:43:09.457533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.014 [2024-11-20 12:43:09.457540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.014 [2024-11-20 12:43:09.469724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.014 [2024-11-20 12:43:09.470089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.014 [2024-11-20 12:43:09.470109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.014 [2024-11-20 12:43:09.470117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.014 [2024-11-20 12:43:09.470297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.014 [2024-11-20 12:43:09.470471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.014 [2024-11-20 12:43:09.470481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.014 [2024-11-20 12:43:09.470488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.014 [2024-11-20 12:43:09.470495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.014 [2024-11-20 12:43:09.483010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.014 [2024-11-20 12:43:09.483385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.014 [2024-11-20 12:43:09.483404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.014 [2024-11-20 12:43:09.483412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.014 [2024-11-20 12:43:09.483594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.014 [2024-11-20 12:43:09.483779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.014 [2024-11-20 12:43:09.483789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.014 [2024-11-20 12:43:09.483796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.014 [2024-11-20 12:43:09.483803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.014 [2024-11-20 12:43:09.496162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.014 [2024-11-20 12:43:09.496517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.014 [2024-11-20 12:43:09.496536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.014 [2024-11-20 12:43:09.496545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.014 [2024-11-20 12:43:09.496727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.014 [2024-11-20 12:43:09.496912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.014 [2024-11-20 12:43:09.496922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.014 [2024-11-20 12:43:09.496929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.014 [2024-11-20 12:43:09.496936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.014 [2024-11-20 12:43:09.509487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.014 [2024-11-20 12:43:09.509921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.509940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.509951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.510141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.510333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.510344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.510351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.510358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.522594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.523029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.523047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.523055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.523234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.523408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.523418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.523424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.523431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.535594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.536045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.536053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.536234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.536408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.536418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.536425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.536431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.548580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.549006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.549024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.549032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.549210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.549388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.549398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.549405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.549412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.561793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.562236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.562263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.562446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.562631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.562641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.562648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.562655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.574744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.575176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.575194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.575209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.575381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.575552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.015 [2024-11-20 12:43:09.575562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.015 [2024-11-20 12:43:09.575569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.015 [2024-11-20 12:43:09.575575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.015 [2024-11-20 12:43:09.587806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.015 [2024-11-20 12:43:09.588233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.015 [2024-11-20 12:43:09.588252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.015 [2024-11-20 12:43:09.588260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.015 [2024-11-20 12:43:09.588443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.015 [2024-11-20 12:43:09.588628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.588638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.588649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.588657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.601018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.601465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.601484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.601492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.601674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.601859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.601869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.601877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.601884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.614082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.614442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.614460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.614467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.614639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.614811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.614821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.614827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.614834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.627151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.627588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.627607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.627615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.627786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.627958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.627967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.627974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.627981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.640365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.640739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.640757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.640765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.640937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.641111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.641121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.641128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.641134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.653451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.653874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.653892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.653900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.654071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.654251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.016 [2024-11-20 12:43:09.654261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.016 [2024-11-20 12:43:09.654268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.016 [2024-11-20 12:43:09.654274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.016 [2024-11-20 12:43:09.666676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.016 [2024-11-20 12:43:09.667095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.016 [2024-11-20 12:43:09.667113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.016 [2024-11-20 12:43:09.667121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.016 [2024-11-20 12:43:09.667309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.016 [2024-11-20 12:43:09.667495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.667506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.667513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.667520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.679877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.680319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.680340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.680352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.680535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.680720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.680730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.680737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.680744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.693126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.693576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.693594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.693603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.693787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.693972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.693982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.693989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.693997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.706278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.706743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.706762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.706771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.706954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.707139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.707149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.707157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.707163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.719374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.719818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.719836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.719844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.720017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.720194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.720210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.720217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.720225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.732387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.732744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.732762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.732770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.732940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.733113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.733123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.733130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.733136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.745461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.745899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.745916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.745924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.746090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.017 [2024-11-20 12:43:09.746262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.017 [2024-11-20 12:43:09.746272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.017 [2024-11-20 12:43:09.746279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.017 [2024-11-20 12:43:09.746286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.017 [2024-11-20 12:43:09.758357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.017 [2024-11-20 12:43:09.758717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.017 [2024-11-20 12:43:09.758734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.017 [2024-11-20 12:43:09.758742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.017 [2024-11-20 12:43:09.758900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.018 [2024-11-20 12:43:09.759059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.018 [2024-11-20 12:43:09.759068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.018 [2024-11-20 12:43:09.759078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.018 [2024-11-20 12:43:09.759085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.278 [2024-11-20 12:43:09.771347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.278 [2024-11-20 12:43:09.771775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-11-20 12:43:09.771793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.278 [2024-11-20 12:43:09.771801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.278 [2024-11-20 12:43:09.771968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.278 [2024-11-20 12:43:09.772141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.278 [2024-11-20 12:43:09.772150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.278 [2024-11-20 12:43:09.772157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.278 [2024-11-20 12:43:09.772163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.278 10029.00 IOPS, 39.18 MiB/s [2024-11-20T11:43:10.044Z] [2024-11-20 12:43:09.784324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.278 [2024-11-20 12:43:09.784746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-11-20 12:43:09.784792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.278 [2024-11-20 12:43:09.784815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.278 [2024-11-20 12:43:09.785407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.278 [2024-11-20 12:43:09.785991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.278 [2024-11-20 12:43:09.786000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.278 [2024-11-20 12:43:09.786007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.278 [2024-11-20 12:43:09.786013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.278 [2024-11-20 12:43:09.797032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.278 [2024-11-20 12:43:09.797476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.278 [2024-11-20 12:43:09.797492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.278 [2024-11-20 12:43:09.797500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.278 [2024-11-20 12:43:09.797660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.278 [2024-11-20 12:43:09.797819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.797829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.797835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.797841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.809880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.810243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.810261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.810268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.810426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.810585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.810594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.810600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.810607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.822650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.823140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.823159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.823166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.823354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.823524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.823534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.823540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.823547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.835444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.835863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.835911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.835936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.836530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.837033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.837043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.837050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.837056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.848177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.848527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.848575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.848606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.849078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.849260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.849270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.849277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.849283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.860898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.861320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.861338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.861345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.861503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.861662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.861672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.861678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.861684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.873626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.873974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.873991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.873998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.874155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.874336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.874344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.874350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.874356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.886642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.886980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.886997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.887005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.887163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.887353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.887364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.887372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.887379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.899446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.899854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.899899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.899922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.900418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.900578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.900587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.900594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.900600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.912200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.912599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.279 [2024-11-20 12:43:09.912616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.279 [2024-11-20 12:43:09.912625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.279 [2024-11-20 12:43:09.912782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.279 [2024-11-20 12:43:09.912941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.279 [2024-11-20 12:43:09.912951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.279 [2024-11-20 12:43:09.912957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.279 [2024-11-20 12:43:09.912963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.279 [2024-11-20 12:43:09.925006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.279 [2024-11-20 12:43:09.925430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.925476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.925500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.926040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.926206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.926215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.926225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.926249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:09.937828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:09.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.938209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.938217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.938384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.938553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.938564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.938570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.938576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:09.950638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:09.951025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.951042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.951050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.951213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.951395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.951405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.951411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.951417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:09.963567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:09.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.964049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.964072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.964466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.964636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.964645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.964651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.964658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:09.976430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:09.976870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.976916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.976940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.977411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.977583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.977593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.977599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.977605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:09.989269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:09.989700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:09.989744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:09.989768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:09.990224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:09.990410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:09.990419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:09.990426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:09.990433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:10.002088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:10.002384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:10.002402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:10.002411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:10.002583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:10.002755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:10.002766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:10.002772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:10.002779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:10.015566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:10.015986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:10.016004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:10.016015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:10.016187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:10.016365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:10.016376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:10.016382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:10.016389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.280 [2024-11-20 12:43:10.028609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.280 [2024-11-20 12:43:10.028965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.280 [2024-11-20 12:43:10.028983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.280 [2024-11-20 12:43:10.028990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.280 [2024-11-20 12:43:10.029158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.280 [2024-11-20 12:43:10.029350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.280 [2024-11-20 12:43:10.029361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.280 [2024-11-20 12:43:10.029367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.280 [2024-11-20 12:43:10.029374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.041849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.042216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.042233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.042241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.042413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.042596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.042607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.042613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.042620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.054772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.055178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.055196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.055208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.055380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.055563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.055573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.055580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.055586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.067683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.068109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.068127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.068136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.068326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.068499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.068509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.068516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.068523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.080636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.080981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.080999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.081006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.081173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.081368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.081378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.081385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.081392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.093643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.094024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.094041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.094049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.094227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.094395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.094405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.094415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.094422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.106520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.106857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.106874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.106881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.107048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.107221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.107232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.107239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.107246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.119514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.119959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.120005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.120030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.541 [2024-11-20 12:43:10.120516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.541 [2024-11-20 12:43:10.120686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.541 [2024-11-20 12:43:10.120696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.541 [2024-11-20 12:43:10.120702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.541 [2024-11-20 12:43:10.120709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.541 [2024-11-20 12:43:10.132499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.541 [2024-11-20 12:43:10.132938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.541 [2024-11-20 12:43:10.132983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.541 [2024-11-20 12:43:10.133007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.133562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.133732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.133741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.133748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.133754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.145386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.145721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.145738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.145746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.145913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.146081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.146091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.146097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.146104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.158351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.158781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.158799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.158806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.158975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.159145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.159155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.159161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.159167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.171301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.171702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.171719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.171728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.171895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.172063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.172073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.172080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.172086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.184179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.184583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.184600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.184611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.184769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.184927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.184936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.184943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.184949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.197057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.197395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.197413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.197421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.197587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.197755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.197764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.197771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.197777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.210042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.210443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.210461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.210469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.210637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.210805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.210815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.210821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.210827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.222935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.223354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.223372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.223380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.223548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.223719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.223729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.223736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.223743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.235981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.236388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.236406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.236414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.236585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.236758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.236767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.236774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.236781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.248982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.249406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.249423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.542 [2024-11-20 12:43:10.249431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.542 [2024-11-20 12:43:10.249600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.542 [2024-11-20 12:43:10.249781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.542 [2024-11-20 12:43:10.249790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.542 [2024-11-20 12:43:10.249796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.542 [2024-11-20 12:43:10.249803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.542 [2024-11-20 12:43:10.261916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.542 [2024-11-20 12:43:10.262342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.542 [2024-11-20 12:43:10.262388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.543 [2024-11-20 12:43:10.262411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.543 [2024-11-20 12:43:10.262966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.543 [2024-11-20 12:43:10.263136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.543 [2024-11-20 12:43:10.263146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.543 [2024-11-20 12:43:10.263156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.543 [2024-11-20 12:43:10.263163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.543 [2024-11-20 12:43:10.274816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.543 [2024-11-20 12:43:10.275218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.543 [2024-11-20 12:43:10.275236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.543 [2024-11-20 12:43:10.275244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.543 [2024-11-20 12:43:10.275411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.543 [2024-11-20 12:43:10.275579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.543 [2024-11-20 12:43:10.275589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.543 [2024-11-20 12:43:10.275595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.543 [2024-11-20 12:43:10.275602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.543 [2024-11-20 12:43:10.287734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.543 [2024-11-20 12:43:10.288070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.543 [2024-11-20 12:43:10.288088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.543 [2024-11-20 12:43:10.288095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.543 [2024-11-20 12:43:10.288268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.543 [2024-11-20 12:43:10.288437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.543 [2024-11-20 12:43:10.288446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.543 [2024-11-20 12:43:10.288453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.543 [2024-11-20 12:43:10.288459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.543 [2024-11-20 12:43:10.300695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.543 [2024-11-20 12:43:10.301124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.543 [2024-11-20 12:43:10.301142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.543 [2024-11-20 12:43:10.301149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.543 [2024-11-20 12:43:10.301327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.543 [2024-11-20 12:43:10.301500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.543 [2024-11-20 12:43:10.301511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.543 [2024-11-20 12:43:10.301517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.543 [2024-11-20 12:43:10.301524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.803 [2024-11-20 12:43:10.313667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.803 [2024-11-20 12:43:10.314097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.803 [2024-11-20 12:43:10.314114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.803 [2024-11-20 12:43:10.314122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.803 [2024-11-20 12:43:10.314295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.803 [2024-11-20 12:43:10.314463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.803 [2024-11-20 12:43:10.314473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.803 [2024-11-20 12:43:10.314479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.803 [2024-11-20 12:43:10.314485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.803 [2024-11-20 12:43:10.326625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.803 [2024-11-20 12:43:10.327036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.803 [2024-11-20 12:43:10.327081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.803 [2024-11-20 12:43:10.327104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.803 [2024-11-20 12:43:10.327695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.803 [2024-11-20 12:43:10.328293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.803 [2024-11-20 12:43:10.328306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.803 [2024-11-20 12:43:10.328313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.803 [2024-11-20 12:43:10.328319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.803 [2024-11-20 12:43:10.339572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.803 [2024-11-20 12:43:10.339975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.803 [2024-11-20 12:43:10.339993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.803 [2024-11-20 12:43:10.340000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.803 [2024-11-20 12:43:10.340166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.803 [2024-11-20 12:43:10.340341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.803 [2024-11-20 12:43:10.340352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.803 [2024-11-20 12:43:10.340358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.803 [2024-11-20 12:43:10.340364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.803 [2024-11-20 12:43:10.352535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.803 [2024-11-20 12:43:10.352947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.803 [2024-11-20 12:43:10.352992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.803 [2024-11-20 12:43:10.353023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.803 [2024-11-20 12:43:10.353401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.803 [2024-11-20 12:43:10.353571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.803 [2024-11-20 12:43:10.353580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.803 [2024-11-20 12:43:10.353587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.803 [2024-11-20 12:43:10.353594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.803 [2024-11-20 12:43:10.365582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.803 [2024-11-20 12:43:10.366005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.803 [2024-11-20 12:43:10.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.803 [2024-11-20 12:43:10.366075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.803 [2024-11-20 12:43:10.366669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.803 [2024-11-20 12:43:10.367271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.803 [2024-11-20 12:43:10.367281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.803 [2024-11-20 12:43:10.367287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.367294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.378468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.378826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.378844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.378851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.379018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.379186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.379195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.379207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.379214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.391468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.391903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.391948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.391971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.392436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.392610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.392619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.392625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.392632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.404421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.404849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.404906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.404930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.405481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.405651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.405661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.405668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.405674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.417321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.417743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.417787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.417811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.418377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.418548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.418558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.418564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.418571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.430197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.430633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.430651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.430658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.430825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.430993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.431003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.431013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.431020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.443071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.443496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.443541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.443565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.444142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.444368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.444379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.444386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.444392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.456151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.456587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.456605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.456612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.456784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.456957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.456967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.456973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.456980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.469123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.469416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.469434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.469442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.469613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.469786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.469796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.469802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.469810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.482033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.482484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.482503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.482511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.482677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.482844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.482853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.804 [2024-11-20 12:43:10.482860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.804 [2024-11-20 12:43:10.482866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.804 [2024-11-20 12:43:10.495073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.804 [2024-11-20 12:43:10.495483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.804 [2024-11-20 12:43:10.495502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.804 [2024-11-20 12:43:10.495510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.804 [2024-11-20 12:43:10.495681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.804 [2024-11-20 12:43:10.495854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.804 [2024-11-20 12:43:10.495864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.495870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.495877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.805 [2024-11-20 12:43:10.507950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.805 [2024-11-20 12:43:10.508368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.805 [2024-11-20 12:43:10.508386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.805 [2024-11-20 12:43:10.508393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.805 [2024-11-20 12:43:10.508564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.805 [2024-11-20 12:43:10.508723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.805 [2024-11-20 12:43:10.508732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.508738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.508744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.805 [2024-11-20 12:43:10.520838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.805 [2024-11-20 12:43:10.521258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.805 [2024-11-20 12:43:10.521317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.805 [2024-11-20 12:43:10.521350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.805 [2024-11-20 12:43:10.521851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.805 [2024-11-20 12:43:10.522011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.805 [2024-11-20 12:43:10.522020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.522026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.522032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.805 [2024-11-20 12:43:10.533710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.805 [2024-11-20 12:43:10.534078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.805 [2024-11-20 12:43:10.534096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.805 [2024-11-20 12:43:10.534104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.805 [2024-11-20 12:43:10.534279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.805 [2024-11-20 12:43:10.534449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.805 [2024-11-20 12:43:10.534458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.534465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.534472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.805 [2024-11-20 12:43:10.546663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.805 [2024-11-20 12:43:10.546994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.805 [2024-11-20 12:43:10.547038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.805 [2024-11-20 12:43:10.547063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.805 [2024-11-20 12:43:10.547541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.805 [2024-11-20 12:43:10.547711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.805 [2024-11-20 12:43:10.547719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.547725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.547731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.805 [2024-11-20 12:43:10.559487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.805 [2024-11-20 12:43:10.559925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.805 [2024-11-20 12:43:10.559943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:04.805 [2024-11-20 12:43:10.559950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:04.805 [2024-11-20 12:43:10.560129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:04.805 [2024-11-20 12:43:10.560329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.805 [2024-11-20 12:43:10.560340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.805 [2024-11-20 12:43:10.560346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.805 [2024-11-20 12:43:10.560353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.065 [2024-11-20 12:43:10.572559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.065 [2024-11-20 12:43:10.572986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.065 [2024-11-20 12:43:10.573031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.065 [2024-11-20 12:43:10.573056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.065 [2024-11-20 12:43:10.573647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.065 [2024-11-20 12:43:10.574081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.065 [2024-11-20 12:43:10.574091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.065 [2024-11-20 12:43:10.574097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.065 [2024-11-20 12:43:10.574103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.065 [2024-11-20 12:43:10.585585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.065 [2024-11-20 12:43:10.586010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.065 [2024-11-20 12:43:10.586028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.065 [2024-11-20 12:43:10.586035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.065 [2024-11-20 12:43:10.586207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.065 [2024-11-20 12:43:10.586377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.065 [2024-11-20 12:43:10.586387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.065 [2024-11-20 12:43:10.586393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.065 [2024-11-20 12:43:10.586400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.065 [2024-11-20 12:43:10.598461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.065 [2024-11-20 12:43:10.598891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.065 [2024-11-20 12:43:10.598936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.065 [2024-11-20 12:43:10.598960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.065 [2024-11-20 12:43:10.599552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.065 [2024-11-20 12:43:10.599765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.065 [2024-11-20 12:43:10.599775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.065 [2024-11-20 12:43:10.599787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.065 [2024-11-20 12:43:10.599794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.065 [2024-11-20 12:43:10.611238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.065 [2024-11-20 12:43:10.611658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.065 [2024-11-20 12:43:10.611675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.065 [2024-11-20 12:43:10.611683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.065 [2024-11-20 12:43:10.611841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.065 [2024-11-20 12:43:10.612000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.065 [2024-11-20 12:43:10.612009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.065 [2024-11-20 12:43:10.612015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.612022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.624044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.624463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.624510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.624533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.625004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.625164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.625173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.625180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.625186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.636821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.637225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.637242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.637249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.637407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.637567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.637576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.637582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.637588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.649620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.650014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.650032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.650039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.650197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.650387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.650397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.650404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.650410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.662571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.662988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.663006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.663013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.663614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.664168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.664177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.664184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.664190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.675325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.675644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.675662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.675669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.675827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.675987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.675997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.676003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.676009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.688097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.688445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.688462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.688473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.688632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.688790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.688800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.688806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.688812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.700908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.701302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.701320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.701328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.701486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.701646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.701655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.701661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.701667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.713959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.714388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.714406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.714413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.714585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.714757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.714767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.714774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.714781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.727104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.727536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.727555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.727564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.727736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.727914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.727924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.066 [2024-11-20 12:43:10.727930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.066 [2024-11-20 12:43:10.727937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.066 [2024-11-20 12:43:10.740075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.066 [2024-11-20 12:43:10.740427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.066 [2024-11-20 12:43:10.740446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.066 [2024-11-20 12:43:10.740455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.066 [2024-11-20 12:43:10.740626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.066 [2024-11-20 12:43:10.740800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.066 [2024-11-20 12:43:10.740810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.740817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.740823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 [2024-11-20 12:43:10.753136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.753575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.753593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.753602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.753773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.753946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.753956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.753963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.753970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 [2024-11-20 12:43:10.766150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.766495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.766515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.766523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.766696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.766870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.766880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.766892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.766901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 7521.75 IOPS, 29.38 MiB/s [2024-11-20T11:43:10.833Z]
00:29:05.067 [2024-11-20 12:43:10.780512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.780943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.780962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.780971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.781142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.781323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.781335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.781342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.781349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 [2024-11-20 12:43:10.793372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.793797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.793842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.793867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.794362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.794532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.794542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.794548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.794555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 [2024-11-20 12:43:10.806231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.806574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.806591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.806598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.806757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.806915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.806924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.806931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.806937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.067 [2024-11-20 12:43:10.819056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.067 [2024-11-20 12:43:10.819397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.067 [2024-11-20 12:43:10.819416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.067 [2024-11-20 12:43:10.819424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.067 [2024-11-20 12:43:10.819590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.067 [2024-11-20 12:43:10.819758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.067 [2024-11-20 12:43:10.819769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.067 [2024-11-20 12:43:10.819775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.067 [2024-11-20 12:43:10.819782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.327 [2024-11-20 12:43:10.831895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.327 [2024-11-20 12:43:10.832299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.327 [2024-11-20 12:43:10.832318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.327 [2024-11-20 12:43:10.832327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.327 [2024-11-20 12:43:10.832500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.327 [2024-11-20 12:43:10.832674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.832684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.832690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.832697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.844748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.845093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.845110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.845118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.845293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.845474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.845484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.845490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.845496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.857537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.857834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.857852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.857863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.858030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.858199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.858216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.858223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.858230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.870432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.870845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.870862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.870870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.871036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.871212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.871224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.871231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.871238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.883340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.883749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.883767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.883775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.883932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.884093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.884103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.884110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.884117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.896210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.896542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.896559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.896567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.896724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.896888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.896897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.896904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.896910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.908962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.909334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.909381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.909405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.909860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.910031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.910041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.910048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.910054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.921791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.922177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.922194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.922207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.922388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.922556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.922566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.922572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.922579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.934639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.934981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.934997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.935004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.935161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.935349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.935360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.935370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.935376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.947597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.947985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.948003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.948010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.328 [2024-11-20 12:43:10.948168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.328 [2024-11-20 12:43:10.948334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.328 [2024-11-20 12:43:10.948345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.328 [2024-11-20 12:43:10.948351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.328 [2024-11-20 12:43:10.948357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.328 [2024-11-20 12:43:10.960413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.328 [2024-11-20 12:43:10.960685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.328 [2024-11-20 12:43:10.960702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.328 [2024-11-20 12:43:10.960709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.329 [2024-11-20 12:43:10.960867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.329 [2024-11-20 12:43:10.961025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.329 [2024-11-20 12:43:10.961035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.329 [2024-11-20 12:43:10.961041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.329 [2024-11-20 12:43:10.961047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.329 [2024-11-20 12:43:10.973172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:10.973602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:10.973621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:10.973629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:10.973795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:10.973963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:10.973973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:10.973980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:10.973986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:10.986047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:10.986504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:10.986549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:10.986574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:10.986988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:10.987156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:10.987166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:10.987174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:10.987180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:10.998929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:10.999337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:10.999356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:10.999364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:10.999545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:10.999713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:10.999722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:10.999729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:10.999735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.011996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.012392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.012447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.012472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.013037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.013219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.013230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.013237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.013243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.024977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.025314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.025332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.025343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.025511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.025679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.025689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.025695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.025702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.037998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.038356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.038374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.038382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.038549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.038716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.038726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.038733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.038739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.050905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.051249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.051267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.051276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.051433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.051593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.051603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.051609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.051615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.063708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.064116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.064150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.064176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.064753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.064917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.064927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.064933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.064940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.329 [2024-11-20 12:43:11.076643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.329 [2024-11-20 12:43:11.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.329 [2024-11-20 12:43:11.077042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.329 [2024-11-20 12:43:11.077050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.329 [2024-11-20 12:43:11.077214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.329 [2024-11-20 12:43:11.077374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.329 [2024-11-20 12:43:11.077384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.329 [2024-11-20 12:43:11.077391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.329 [2024-11-20 12:43:11.077397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.089654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.090002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.090047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.090071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.090662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.091162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.091172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.091178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.091184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.102626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.103045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.103062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.103070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.103231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.103391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.103401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.103410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.103417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.115547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.115892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.115910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.115918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.116077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.116240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.116251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.116258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.116264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.128360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.128717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.128763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.128787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.129301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.129470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.129480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.129486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.129493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.141260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.141638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.141655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.141663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.141830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.141998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.142008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.142015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.142021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.153994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.154436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.154454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.154462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.154621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.154780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.154789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.154795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.154801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.166873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.167316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.167334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.167342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.167511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.167669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.167679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.591 [2024-11-20 12:43:11.167685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.591 [2024-11-20 12:43:11.167692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.591 [2024-11-20 12:43:11.179873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.591 [2024-11-20 12:43:11.180313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.591 [2024-11-20 12:43:11.180369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.591 [2024-11-20 12:43:11.180393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.591 [2024-11-20 12:43:11.180969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.591 [2024-11-20 12:43:11.181197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.591 [2024-11-20 12:43:11.181211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.592 [2024-11-20 12:43:11.181218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.592 [2024-11-20 12:43:11.181225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.592 [2024-11-20 12:43:11.192692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.592 [2024-11-20 12:43:11.193092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.592 [2024-11-20 12:43:11.193137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.592 [2024-11-20 12:43:11.193168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.592 [2024-11-20 12:43:11.193763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.592 [2024-11-20 12:43:11.194333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.592 [2024-11-20 12:43:11.194344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.592 [2024-11-20 12:43:11.194350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.592 [2024-11-20 12:43:11.194357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.592 [2024-11-20 12:43:11.205410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.592 [2024-11-20 12:43:11.205760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.592 [2024-11-20 12:43:11.205777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.592 [2024-11-20 12:43:11.205785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.592 [2024-11-20 12:43:11.205943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.592 [2024-11-20 12:43:11.206101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.592 [2024-11-20 12:43:11.206111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.592 [2024-11-20 12:43:11.206116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.592 [2024-11-20 12:43:11.206123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.592 [2024-11-20 12:43:11.218213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.592 [2024-11-20 12:43:11.218556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.592 [2024-11-20 12:43:11.218573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.592 [2024-11-20 12:43:11.218580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.592 [2024-11-20 12:43:11.218738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.592 [2024-11-20 12:43:11.218897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.592 [2024-11-20 12:43:11.218906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.592 [2024-11-20 12:43:11.218912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.592 [2024-11-20 12:43:11.218919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.592 [2024-11-20 12:43:11.231002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.592 [2024-11-20 12:43:11.231338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.592 [2024-11-20 12:43:11.231356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.592 [2024-11-20 12:43:11.231363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.592 [2024-11-20 12:43:11.231521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.592 [2024-11-20 12:43:11.231686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.592 [2024-11-20 12:43:11.231696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.592 [2024-11-20 12:43:11.231701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.592 [2024-11-20 12:43:11.231708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.592 [2024-11-20 12:43:11.244037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.592 [2024-11-20 12:43:11.244410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.592 [2024-11-20 12:43:11.244427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.592 [2024-11-20 12:43:11.244434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.592 [2024-11-20 12:43:11.244594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.592 [2024-11-20 12:43:11.244752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.592 [2024-11-20 12:43:11.244761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.592 [2024-11-20 12:43:11.244768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.592 [2024-11-20 12:43:11.244774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.592 [2024-11-20 12:43:11.256798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.592 [2024-11-20 12:43:11.257216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.592 [2024-11-20 12:43:11.257233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.592 [2024-11-20 12:43:11.257258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.592 [2024-11-20 12:43:11.257425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.592 [2024-11-20 12:43:11.257594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.592 [2024-11-20 12:43:11.257605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.592 [2024-11-20 12:43:11.257612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.592 [2024-11-20 12:43:11.257620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.592 [2024-11-20 12:43:11.269713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.592 [2024-11-20 12:43:11.270140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.592 [2024-11-20 12:43:11.270157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.592 [2024-11-20 12:43:11.270165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.592 [2024-11-20 12:43:11.270344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.592 [2024-11-20 12:43:11.270517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.592 [2024-11-20 12:43:11.270527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.592 [2024-11-20 12:43:11.270538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.592 [2024-11-20 12:43:11.270545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.592 [2024-11-20 12:43:11.282662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.592 [2024-11-20 12:43:11.283081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.592 [2024-11-20 12:43:11.283121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.592 [2024-11-20 12:43:11.283147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.592 [2024-11-20 12:43:11.283699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.592 [2024-11-20 12:43:11.283869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.592 [2024-11-20 12:43:11.283877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.283884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.283890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.593 [2024-11-20 12:43:11.295456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.593 [2024-11-20 12:43:11.295855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.593 [2024-11-20 12:43:11.295900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.593 [2024-11-20 12:43:11.295924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.593 [2024-11-20 12:43:11.296407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.593 [2024-11-20 12:43:11.296567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.593 [2024-11-20 12:43:11.296575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.296581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.296586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.593 [2024-11-20 12:43:11.308182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.593 [2024-11-20 12:43:11.308583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.593 [2024-11-20 12:43:11.308629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.593 [2024-11-20 12:43:11.308652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.593 [2024-11-20 12:43:11.309185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.593 [2024-11-20 12:43:11.309585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.593 [2024-11-20 12:43:11.309605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.309620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.309634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.593 [2024-11-20 12:43:11.323340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.593 [2024-11-20 12:43:11.323836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.593 [2024-11-20 12:43:11.323859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.593 [2024-11-20 12:43:11.323869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.593 [2024-11-20 12:43:11.324121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.593 [2024-11-20 12:43:11.324382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.593 [2024-11-20 12:43:11.324396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.324405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.324415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.593 [2024-11-20 12:43:11.336209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.593 [2024-11-20 12:43:11.336628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.593 [2024-11-20 12:43:11.336645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.593 [2024-11-20 12:43:11.336653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.593 [2024-11-20 12:43:11.336820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.593 [2024-11-20 12:43:11.336987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.593 [2024-11-20 12:43:11.336997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.337004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.337010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.593 [2024-11-20 12:43:11.349266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.593 [2024-11-20 12:43:11.349666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.593 [2024-11-20 12:43:11.349684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.593 [2024-11-20 12:43:11.349691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.593 [2024-11-20 12:43:11.349862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.593 [2024-11-20 12:43:11.350035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.593 [2024-11-20 12:43:11.350045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.593 [2024-11-20 12:43:11.350052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.593 [2024-11-20 12:43:11.350059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.362253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.362584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.362602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.362614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.362781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.362950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.362959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.362965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.362972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.375070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.375420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.375466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.375490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.375988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.376149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.376159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.376164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.376171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.387823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.388218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.388252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.388259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.388436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.388597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.388606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.388612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.388618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.400632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.401046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.401063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.401071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.401250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.401422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.401431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.401438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.401444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.413473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.413891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.413907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.413914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.414072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.414254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.414265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.414272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.414279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.426232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.426646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.426695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.426719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.427311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.427884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.427893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.427900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.427906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.439026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.439443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.439461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.439469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.439626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.439787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.439796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.439805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.439813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.451782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.452169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.452187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.452194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.452380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.452548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.854 [2024-11-20 12:43:11.452558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.854 [2024-11-20 12:43:11.452564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.854 [2024-11-20 12:43:11.452571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.854 [2024-11-20 12:43:11.464521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.854 [2024-11-20 12:43:11.464935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.854 [2024-11-20 12:43:11.464951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.854 [2024-11-20 12:43:11.464958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.854 [2024-11-20 12:43:11.465115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.854 [2024-11-20 12:43:11.465281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.855 [2024-11-20 12:43:11.465291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.855 [2024-11-20 12:43:11.465297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.855 [2024-11-20 12:43:11.465302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.855 [2024-11-20 12:43:11.477348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.855 [2024-11-20 12:43:11.477777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.855 [2024-11-20 12:43:11.477821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.855 [2024-11-20 12:43:11.477844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.855 [2024-11-20 12:43:11.478270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.855 [2024-11-20 12:43:11.478440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.855 [2024-11-20 12:43:11.478449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.855 [2024-11-20 12:43:11.478456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.855 [2024-11-20 12:43:11.478462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.855 [2024-11-20 12:43:11.490205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.855 [2024-11-20 12:43:11.490621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.855 [2024-11-20 12:43:11.490665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.855 [2024-11-20 12:43:11.490689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.855 [2024-11-20 12:43:11.491235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.855 [2024-11-20 12:43:11.491623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.855 [2024-11-20 12:43:11.491641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.855 [2024-11-20 12:43:11.491655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.855 [2024-11-20 12:43:11.491669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.855 [2024-11-20 12:43:11.505019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.855 [2024-11-20 12:43:11.505505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.855 [2024-11-20 12:43:11.505528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.855 [2024-11-20 12:43:11.505539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.855 [2024-11-20 12:43:11.505791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.855 [2024-11-20 12:43:11.506045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.855 [2024-11-20 12:43:11.506059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.855 [2024-11-20 12:43:11.506068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.855 [2024-11-20 12:43:11.506078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.855 [2024-11-20 12:43:11.518118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.855 [2024-11-20 12:43:11.518527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.855 [2024-11-20 12:43:11.518545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.855 [2024-11-20 12:43:11.518554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.855 [2024-11-20 12:43:11.518726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.855 [2024-11-20 12:43:11.518899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.855 [2024-11-20 12:43:11.518909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.855 [2024-11-20 12:43:11.518915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.855 [2024-11-20 12:43:11.518922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.855 [2024-11-20 12:43:11.531094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.855 [2024-11-20 12:43:11.531428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.855 [2024-11-20 12:43:11.531446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.855 [2024-11-20 12:43:11.531457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.855 [2024-11-20 12:43:11.531629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.855 [2024-11-20 12:43:11.531802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.855 [2024-11-20 12:43:11.531812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.855 [2024-11-20 12:43:11.531820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.855 [2024-11-20 12:43:11.531826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.855 [2024-11-20 12:43:11.544049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.855 [2024-11-20 12:43:11.544332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.855 [2024-11-20 12:43:11.544350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.855 [2024-11-20 12:43:11.544357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.855 [2024-11-20 12:43:11.544526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.855 [2024-11-20 12:43:11.544696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.855 [2024-11-20 12:43:11.544705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.855 [2024-11-20 12:43:11.544712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.855 [2024-11-20 12:43:11.544718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.855 [2024-11-20 12:43:11.556996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.855 [2024-11-20 12:43:11.557346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.855 [2024-11-20 12:43:11.557366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:05.855 [2024-11-20 12:43:11.557373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:05.855 [2024-11-20 12:43:11.557540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:05.855 [2024-11-20 12:43:11.557708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.855 [2024-11-20 12:43:11.557718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.855 [2024-11-20 12:43:11.557724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.855 [2024-11-20 12:43:11.557731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.855 [2024-11-20 12:43:11.569994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.855 [2024-11-20 12:43:11.570433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.855 [2024-11-20 12:43:11.570480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.855 [2024-11-20 12:43:11.570503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.855 [2024-11-20 12:43:11.571003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.855 [2024-11-20 12:43:11.571167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.855 [2024-11-20 12:43:11.571178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.855 [2024-11-20 12:43:11.571184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.855 [2024-11-20 12:43:11.571190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.855 [2024-11-20 12:43:11.582743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.855 [2024-11-20 12:43:11.583163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.855 [2024-11-20 12:43:11.583219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.855 [2024-11-20 12:43:11.583247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.855 [2024-11-20 12:43:11.583824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.855 [2024-11-20 12:43:11.584312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.855 [2024-11-20 12:43:11.584322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.855 [2024-11-20 12:43:11.584329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.855 [2024-11-20 12:43:11.584335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.855 [2024-11-20 12:43:11.595501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.855 [2024-11-20 12:43:11.595839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.855 [2024-11-20 12:43:11.595892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.855 [2024-11-20 12:43:11.595928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.855 [2024-11-20 12:43:11.596461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.855 [2024-11-20 12:43:11.596631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.856 [2024-11-20 12:43:11.596640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.856 [2024-11-20 12:43:11.596647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.856 [2024-11-20 12:43:11.596653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.856 [2024-11-20 12:43:11.608213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.856 [2024-11-20 12:43:11.608624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.856 [2024-11-20 12:43:11.608641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:05.856 [2024-11-20 12:43:11.608648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:05.856 [2024-11-20 12:43:11.608806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:05.856 [2024-11-20 12:43:11.608965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.856 [2024-11-20 12:43:11.608975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.856 [2024-11-20 12:43:11.608985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.856 [2024-11-20 12:43:11.608991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.116 [2024-11-20 12:43:11.621205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.116 [2024-11-20 12:43:11.621612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.116 [2024-11-20 12:43:11.621630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.116 [2024-11-20 12:43:11.621638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.116 [2024-11-20 12:43:11.621810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.116 [2024-11-20 12:43:11.621983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.116 [2024-11-20 12:43:11.621994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.116 [2024-11-20 12:43:11.622000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.116 [2024-11-20 12:43:11.622007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.116 [2024-11-20 12:43:11.634025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.116 [2024-11-20 12:43:11.634364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.116 [2024-11-20 12:43:11.634383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.116 [2024-11-20 12:43:11.634390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.116 [2024-11-20 12:43:11.634557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.116 [2024-11-20 12:43:11.634724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.116 [2024-11-20 12:43:11.634734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.116 [2024-11-20 12:43:11.634740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.116 [2024-11-20 12:43:11.634747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.646791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.647159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.647175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.647182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.647369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.647537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.647547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.647554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.647560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.659560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.659894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.659937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.659962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.660480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.660641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.660651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.660657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.660663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.672414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.672814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.672859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.672883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.673366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.673527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.673537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.673543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.673549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.685149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.685571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.685588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.685595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.685752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.685912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.685922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.685928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.685934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.697884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.698299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.698316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.698326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.698484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.698643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.698653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.698659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.698665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.710599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.711014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.711031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.711041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.711209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.711392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.711401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.711408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.711414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.723381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.723803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.723848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.723872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.724466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.724912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.724921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.724927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.724933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.736156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.736580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.736597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.736604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.736762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.736923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.736933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.736939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.736945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.748934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.749370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.749405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.749413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.749581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.749749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.749759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.749765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.117 [2024-11-20 12:43:11.749772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.117 [2024-11-20 12:43:11.761680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.117 [2024-11-20 12:43:11.762092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.117 [2024-11-20 12:43:11.762109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.117 [2024-11-20 12:43:11.762116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.117 [2024-11-20 12:43:11.762297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.117 [2024-11-20 12:43:11.762465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.117 [2024-11-20 12:43:11.762475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.117 [2024-11-20 12:43:11.762481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.762488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.774439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.774862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.774906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.774930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.775523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.775956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.775967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.775977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.775985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 6017.40 IOPS, 23.51 MiB/s [2024-11-20T11:43:11.884Z]
00:29:06.118 [2024-11-20 12:43:11.787473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.787869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.787887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.787896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.788068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.788246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.788258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.788265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.788272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.800377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.800803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.800850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.800874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.801466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.801994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.802004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.802011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.802018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.813178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.813525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.813543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.813551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.813708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.813867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.813876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.813882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.813889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.826024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.826369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.826387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.826395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.826562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.826729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.826739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.826746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.826752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.838942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.839300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.839327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.839502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.839661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.839670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.839677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.839683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.851852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.852197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.852221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.852228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.852386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.852546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.852556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.852562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.852568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.864701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.118 [2024-11-20 12:43:11.865122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.118 [2024-11-20 12:43:11.865140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.118 [2024-11-20 12:43:11.865153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.118 [2024-11-20 12:43:11.865339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.118 [2024-11-20 12:43:11.865508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.118 [2024-11-20 12:43:11.865518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.118 [2024-11-20 12:43:11.865525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.118 [2024-11-20 12:43:11.865532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.118 [2024-11-20 12:43:11.877767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.118 [2024-11-20 12:43:11.878125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.118 [2024-11-20 12:43:11.878143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.118 [2024-11-20 12:43:11.878151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.118 [2024-11-20 12:43:11.878331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.379 [2024-11-20 12:43:11.878505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.379 [2024-11-20 12:43:11.878514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.379 [2024-11-20 12:43:11.878521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.379 [2024-11-20 12:43:11.878528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.379 [2024-11-20 12:43:11.890705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.379 [2024-11-20 12:43:11.891032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-11-20 12:43:11.891050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.379 [2024-11-20 12:43:11.891057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.379 [2024-11-20 12:43:11.891234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.379 [2024-11-20 12:43:11.891403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.379 [2024-11-20 12:43:11.891413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.379 [2024-11-20 12:43:11.891420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.379 [2024-11-20 12:43:11.891426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.379 [2024-11-20 12:43:11.903454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.379 [2024-11-20 12:43:11.903851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-11-20 12:43:11.903868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.379 [2024-11-20 12:43:11.903876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.379 [2024-11-20 12:43:11.904042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.379 [2024-11-20 12:43:11.904219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.379 [2024-11-20 12:43:11.904230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.379 [2024-11-20 12:43:11.904236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.904243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.916156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.916518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.916526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.916683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.916841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.916850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.916856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.916862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.928912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.929338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.929384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.929408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.929983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.930161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.930170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.930175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.930182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.941665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.942073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.942112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.942138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.942730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.943000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.943010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.943019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.943026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.954451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.954792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.954809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.954816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.954973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.955132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.955141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.955147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.955154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.967179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.967597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.967614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.967621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.967779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.967938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.967948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.967954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.967960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.979959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.980289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.980334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.980357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.980934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.981362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.981372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.981379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.981386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:11.992758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:11.993158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:11.993176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:11.993183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:11.993367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:11.993535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:11.993545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:11.993552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:11.993558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:12.005548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:12.005976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:12.006022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:12.006045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:12.006446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:12.006615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:12.006625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:12.006631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:12.006638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:12.018336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:12.018741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:12.018758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:12.018765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:12.018923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:12.019082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:12.019091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.380 [2024-11-20 12:43:12.019098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.380 [2024-11-20 12:43:12.019103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.380 [2024-11-20 12:43:12.031087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.380 [2024-11-20 12:43:12.031519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-11-20 12:43:12.031537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.380 [2024-11-20 12:43:12.031548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.380 [2024-11-20 12:43:12.031716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.380 [2024-11-20 12:43:12.031884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.380 [2024-11-20 12:43:12.031893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.031900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.031906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.044144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.044576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.044594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.044602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.044774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.044946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.044957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.044964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.044971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.057027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.057394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.057412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.057419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.057586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.057754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.057763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.057770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.057776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.069863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.070217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.070234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.070241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.070399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.070562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.070571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.070578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.070584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.082870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.083291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.083308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.083316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.083484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.083653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.083662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.083669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.083675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.095606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.096023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.096039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.096047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.096211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.096394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.096404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.096410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.096416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.108467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.108876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.108893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.108899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.109057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.109221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.109231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.109241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.109247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.121299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.121736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.121743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.121901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.122061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.122070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.122077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.122083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.381 [2024-11-20 12:43:12.134069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.381 [2024-11-20 12:43:12.134488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-11-20 12:43:12.134504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.381 [2024-11-20 12:43:12.134513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.381 [2024-11-20 12:43:12.135104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.381 [2024-11-20 12:43:12.135650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.381 [2024-11-20 12:43:12.135660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.381 [2024-11-20 12:43:12.135666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.381 [2024-11-20 12:43:12.135673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.642 [2024-11-20 12:43:12.146954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.642 [2024-11-20 12:43:12.147383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.642 [2024-11-20 12:43:12.147401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.642 [2024-11-20 12:43:12.147409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.642 [2024-11-20 12:43:12.147580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.642 [2024-11-20 12:43:12.147754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.642 [2024-11-20 12:43:12.147763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.642 [2024-11-20 12:43:12.147769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.642 [2024-11-20 12:43:12.147776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.642 [2024-11-20 12:43:12.159807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.642 [2024-11-20 12:43:12.160069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.642 [2024-11-20 12:43:12.160085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.642 [2024-11-20 12:43:12.160093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.642 [2024-11-20 12:43:12.160256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.642 [2024-11-20 12:43:12.160415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.642 [2024-11-20 12:43:12.160424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.642 [2024-11-20 12:43:12.160431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.642 [2024-11-20 12:43:12.160437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.642 [2024-11-20 12:43:12.172657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.642 [2024-11-20 12:43:12.172981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.642 [2024-11-20 12:43:12.172999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.642 [2024-11-20 12:43:12.173006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.642 [2024-11-20 12:43:12.173164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.642 [2024-11-20 12:43:12.173327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.642 [2024-11-20 12:43:12.173341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.642 [2024-11-20 12:43:12.173351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.642 [2024-11-20 12:43:12.173359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.642 [2024-11-20 12:43:12.185525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.186292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.186315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.186325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.186506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.186665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.642 [2024-11-20 12:43:12.186675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.642 [2024-11-20 12:43:12.186681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.642 [2024-11-20 12:43:12.186687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.642 [2024-11-20 12:43:12.198396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.198754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.198802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.198835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.199434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.199987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.642 [2024-11-20 12:43:12.199997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.642 [2024-11-20 12:43:12.200003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.642 [2024-11-20 12:43:12.200010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.642 [2024-11-20 12:43:12.211418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.211792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.211811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.211819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.211992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.212167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.642 [2024-11-20 12:43:12.212177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.642 [2024-11-20 12:43:12.212185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.642 [2024-11-20 12:43:12.212192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.642 [2024-11-20 12:43:12.224288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.224609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.224627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.224634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.224792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.224951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.642 [2024-11-20 12:43:12.224961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.642 [2024-11-20 12:43:12.224967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.642 [2024-11-20 12:43:12.224974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.642 [2024-11-20 12:43:12.237020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.237304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.237321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.237329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.237496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.237667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.642 [2024-11-20 12:43:12.237677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.642 [2024-11-20 12:43:12.237684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.642 [2024-11-20 12:43:12.237690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.642 [2024-11-20 12:43:12.249888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.642 [2024-11-20 12:43:12.250214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.642 [2024-11-20 12:43:12.250248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.642 [2024-11-20 12:43:12.250256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.642 [2024-11-20 12:43:12.250424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.642 [2024-11-20 12:43:12.250592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.250602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.250609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.250616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.262714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.263100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.263117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.263125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.263297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.263474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.263484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.263490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.263496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.275537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.275875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.275892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.275899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.276057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.276222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.276231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.276257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.276265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.288292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.288688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.288706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.288714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.288886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.289059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.289070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.289077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.289083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.301324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.301738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.301756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.301764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.301935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.302109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.302120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.302127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.302133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.314349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.314776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.314793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.314801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.314968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.315136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.315146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.315152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.315159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.327152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.327530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.327548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.327554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.327712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.327871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.327881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.327887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.327893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.340106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.340380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.340397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.340405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.340562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.340722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.340731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.340737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.340743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.353079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.353484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.353501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.353509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.353667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.353826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.353836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.353842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.353849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.365959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.366335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.366353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.366364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.643 [2024-11-20 12:43:12.366531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.643 [2024-11-20 12:43:12.366699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.643 [2024-11-20 12:43:12.366709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.643 [2024-11-20 12:43:12.366715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.643 [2024-11-20 12:43:12.366721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.643 [2024-11-20 12:43:12.378866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.643 [2024-11-20 12:43:12.379189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.643 [2024-11-20 12:43:12.379214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.643 [2024-11-20 12:43:12.379223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.644 [2024-11-20 12:43:12.379404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.644 [2024-11-20 12:43:12.379573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.644 [2024-11-20 12:43:12.379583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.644 [2024-11-20 12:43:12.379590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.644 [2024-11-20 12:43:12.379596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.644 [2024-11-20 12:43:12.391839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.644 [2024-11-20 12:43:12.392130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.644 [2024-11-20 12:43:12.392147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.644 [2024-11-20 12:43:12.392155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.644 [2024-11-20 12:43:12.392332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.644 [2024-11-20 12:43:12.392507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.644 [2024-11-20 12:43:12.392517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.644 [2024-11-20 12:43:12.392524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.644 [2024-11-20 12:43:12.392530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.904 [2024-11-20 12:43:12.404833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.904 [2024-11-20 12:43:12.405163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.904 [2024-11-20 12:43:12.405181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.904 [2024-11-20 12:43:12.405188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.904 [2024-11-20 12:43:12.405366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.904 [2024-11-20 12:43:12.405542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.904 [2024-11-20 12:43:12.405552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.904 [2024-11-20 12:43:12.405559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.904 [2024-11-20 12:43:12.405566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.904 [2024-11-20 12:43:12.417878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.904 [2024-11-20 12:43:12.418240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.904 [2024-11-20 12:43:12.418258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.904 [2024-11-20 12:43:12.418266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.904 [2024-11-20 12:43:12.418439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.904 [2024-11-20 12:43:12.418613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.904 [2024-11-20 12:43:12.418623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.905 [2024-11-20 12:43:12.418631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.905 [2024-11-20 12:43:12.418637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.905 [2024-11-20 12:43:12.430951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.905 [2024-11-20 12:43:12.431394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.905 [2024-11-20 12:43:12.431413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.905 [2024-11-20 12:43:12.431422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.905 [2024-11-20 12:43:12.431604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.905 [2024-11-20 12:43:12.431789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.905 [2024-11-20 12:43:12.431800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.905 [2024-11-20 12:43:12.431807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.905 [2024-11-20 12:43:12.431815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.905 [2024-11-20 12:43:12.444110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:06.905 [2024-11-20 12:43:12.444554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.905 [2024-11-20 12:43:12.444572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.905 [2024-11-20 12:43:12.444580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set
00:29:06.905 [2024-11-20 12:43:12.444763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor
00:29:06.905 [2024-11-20 12:43:12.444947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.905 [2024-11-20 12:43:12.444958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.905 [2024-11-20 12:43:12.444968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.905 [2024-11-20 12:43:12.444976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 336503 Killed "${NVMF_APP[@]}" "$@"
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=337838
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 337838
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 337838 ']'
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.905 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 [2024-11-20 12:43:12.457341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.905 [2024-11-20 12:43:12.457684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.905 [2024-11-20 12:43:12.457703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420
00:29:06.905 [2024-11-20 12:43:12.457712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:06.905 [2024-11-20 12:43:12.457894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.905 [2024-11-20 12:43:12.458078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:06.905 [2024-11-20 12:43:12.458089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:06.905 [2024-11-20 12:43:12.458096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:06.905 [2024-11-20 12:43:12.458103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:06.905 [2024-11-20 12:43:12.470346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.905 [2024-11-20 12:43:12.470703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-11-20 12:43:12.470720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.905 [2024-11-20 12:43:12.470727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.905 [2024-11-20 12:43:12.470898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.905 [2024-11-20 12:43:12.471071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.905 [2024-11-20 12:43:12.471082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.905 [2024-11-20 12:43:12.471088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.905 [2024-11-20 12:43:12.471095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.905 [2024-11-20 12:43:12.483415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.905 [2024-11-20 12:43:12.483760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-11-20 12:43:12.483778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.905 [2024-11-20 12:43:12.483786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.905 [2024-11-20 12:43:12.483958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.905 [2024-11-20 12:43:12.484131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.905 [2024-11-20 12:43:12.484141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.905 [2024-11-20 12:43:12.484148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.905 [2024-11-20 12:43:12.484155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.905 [2024-11-20 12:43:12.496417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.905 [2024-11-20 12:43:12.496702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-11-20 12:43:12.496720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.905 [2024-11-20 12:43:12.496728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.905 [2024-11-20 12:43:12.496894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.905 [2024-11-20 12:43:12.497062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.905 [2024-11-20 12:43:12.497071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.905 [2024-11-20 12:43:12.497078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.905 [2024-11-20 12:43:12.497084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.905 [2024-11-20 12:43:12.503656] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:29:06.905 [2024-11-20 12:43:12.503697] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.905 [2024-11-20 12:43:12.509435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.905 [2024-11-20 12:43:12.509792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-11-20 12:43:12.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.905 [2024-11-20 12:43:12.509817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.905 [2024-11-20 12:43:12.509989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.905 [2024-11-20 12:43:12.510162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.905 [2024-11-20 12:43:12.510176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.905 [2024-11-20 12:43:12.510183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.905 [2024-11-20 12:43:12.510190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.905 [2024-11-20 12:43:12.522493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.905 [2024-11-20 12:43:12.522901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-11-20 12:43:12.522919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.905 [2024-11-20 12:43:12.522928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.905 [2024-11-20 12:43:12.523100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.905 [2024-11-20 12:43:12.523279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.905 [2024-11-20 12:43:12.523289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.905 [2024-11-20 12:43:12.523298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.523305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.535550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.535839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.535858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.535865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.536038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.536218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.536229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.536236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.536243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.548552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.548871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.548889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.548897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.549069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.549247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.549258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.549265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.549277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.561592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.561884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.561901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.561910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.562082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.562264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.562275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.562283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.562290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.574598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.574936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.574954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.574962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.575134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.575313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.575323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.575330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.575337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.583383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.906 [2024-11-20 12:43:12.587635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.587992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.588009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.588017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.588185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.588378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.588388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.588395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.588402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.600561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.600980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.600998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.601006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.601178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.601355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.601365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.601372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.601379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.906 [2024-11-20 12:43:12.613542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.613975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.613992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.614001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.614173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.614350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.906 [2024-11-20 12:43:12.614360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.906 [2024-11-20 12:43:12.614368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.906 [2024-11-20 12:43:12.614375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.906 [2024-11-20 12:43:12.625189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.906 [2024-11-20 12:43:12.625218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.906 [2024-11-20 12:43:12.625225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.906 [2024-11-20 12:43:12.625231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:06.906 [2024-11-20 12:43:12.625236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.906 [2024-11-20 12:43:12.626510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.906 [2024-11-20 12:43:12.626637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.906 [2024-11-20 12:43:12.626745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.906 [2024-11-20 12:43:12.626745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.906 [2024-11-20 12:43:12.626937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-11-20 12:43:12.626955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.906 [2024-11-20 12:43:12.626963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.906 [2024-11-20 12:43:12.627133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.906 [2024-11-20 12:43:12.627318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.907 [2024-11-20 12:43:12.627329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.907 [2024-11-20 12:43:12.627335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.907 [2024-11-20 12:43:12.627342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.907 [2024-11-20 12:43:12.639484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.907 [2024-11-20 12:43:12.639937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.907 [2024-11-20 12:43:12.639958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.907 [2024-11-20 12:43:12.639967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.907 [2024-11-20 12:43:12.640140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.907 [2024-11-20 12:43:12.640323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.907 [2024-11-20 12:43:12.640333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.907 [2024-11-20 12:43:12.640341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.907 [2024-11-20 12:43:12.640348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.907 [2024-11-20 12:43:12.652494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.907 [2024-11-20 12:43:12.652939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.907 [2024-11-20 12:43:12.652961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.907 [2024-11-20 12:43:12.652970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.907 [2024-11-20 12:43:12.653143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:06.907 [2024-11-20 12:43:12.653324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.907 [2024-11-20 12:43:12.653335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.907 [2024-11-20 12:43:12.653342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.907 [2024-11-20 12:43:12.653349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.907 [2024-11-20 12:43:12.665494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.907 [2024-11-20 12:43:12.665861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.907 [2024-11-20 12:43:12.665882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:06.907 [2024-11-20 12:43:12.665890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:06.907 [2024-11-20 12:43:12.666064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.167 [2024-11-20 12:43:12.666243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.167 [2024-11-20 12:43:12.666254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.167 [2024-11-20 12:43:12.666267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.167 [2024-11-20 12:43:12.666276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.167 [2024-11-20 12:43:12.678587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.167 [2024-11-20 12:43:12.679030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-20 12:43:12.679050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.167 [2024-11-20 12:43:12.679059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.167 [2024-11-20 12:43:12.679240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.167 [2024-11-20 12:43:12.679415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.167 [2024-11-20 12:43:12.679426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.167 [2024-11-20 12:43:12.679433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.167 [2024-11-20 12:43:12.679440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.167 [2024-11-20 12:43:12.691585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.167 [2024-11-20 12:43:12.691949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-20 12:43:12.691968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.167 [2024-11-20 12:43:12.691976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.167 [2024-11-20 12:43:12.692149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.167 [2024-11-20 12:43:12.692330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.167 [2024-11-20 12:43:12.692340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.167 [2024-11-20 12:43:12.692347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.167 [2024-11-20 12:43:12.692354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.167 [2024-11-20 12:43:12.704648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.167 [2024-11-20 12:43:12.705078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-20 12:43:12.705096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.167 [2024-11-20 12:43:12.705105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.167 [2024-11-20 12:43:12.705282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.167 [2024-11-20 12:43:12.705457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.167 [2024-11-20 12:43:12.705467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.167 [2024-11-20 12:43:12.705474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.167 [2024-11-20 12:43:12.705481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.167 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.167 [2024-11-20 12:43:12.717673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.167 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:07.167 [2024-11-20 12:43:12.718085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-20 12:43:12.718105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.167 [2024-11-20 12:43:12.718114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.167 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.167 [2024-11-20 12:43:12.718294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.167 [2024-11-20 12:43:12.718468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.718478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.168 [2024-11-20 12:43:12.718485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.718492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.168 [2024-11-20 12:43:12.730648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.731074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.731093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.731102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.731281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.731455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.731465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.731472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.731479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 [2024-11-20 12:43:12.743643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.744050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.744069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.744077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.744254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.744427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.744437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.744444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.744451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.168 [2024-11-20 12:43:12.756613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.757050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.757068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.757075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.757254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.757428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.757439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.757446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.757453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 [2024-11-20 12:43:12.762518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.168 [2024-11-20 12:43:12.769602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.770024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.770041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.770049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.770226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.770400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.770410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.770417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.770424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 5014.50 IOPS, 19.59 MiB/s [2024-11-20T11:43:12.934Z] [2024-11-20 12:43:12.783851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.784269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.784287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.784295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.784473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.784647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.784657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.784663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.784670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 [2024-11-20 12:43:12.796826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.797187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-20 12:43:12.797210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.168 [2024-11-20 12:43:12.797219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.168 [2024-11-20 12:43:12.797392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.168 [2024-11-20 12:43:12.797565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.168 [2024-11-20 12:43:12.797575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.168 [2024-11-20 12:43:12.797584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.168 [2024-11-20 12:43:12.797591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.168 Malloc0 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.168 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.168 [2024-11-20 12:43:12.809877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.168 [2024-11-20 12:43:12.810299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-20 12:43:12.810317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.169 [2024-11-20 12:43:12.810326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.169 [2024-11-20 12:43:12.810498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.169 [2024-11-20 12:43:12.810671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.169 [2024-11-20 12:43:12.810681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:07.169 [2024-11-20 12:43:12.810688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.169 [2024-11-20 12:43:12.810695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.169 [2024-11-20 12:43:12.823033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.169 [2024-11-20 12:43:12.823395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-20 12:43:12.823414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d8500 with addr=10.0.0.2, port=4420 00:29:07.169 [2024-11-20 12:43:12.823423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d8500 is same with the state(6) to be set 00:29:07.169 [2024-11-20 12:43:12.823596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d8500 (9): Bad file descriptor 00:29:07.169 [2024-11-20 12:43:12.823769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:07.169 [2024-11-20 12:43:12.823779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:29:07.169 [2024-11-20 12:43:12.823786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:07.169 [2024-11-20 12:43:12.823793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:07.169 [2024-11-20 12:43:12.824212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.169 12:43:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 336770 00:29:07.169 [2024-11-20 12:43:12.836071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:07.428 [2024-11-20 12:43:12.991420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:09.299 5561.14 IOPS, 21.72 MiB/s [2024-11-20T11:43:16.001Z] 6269.88 IOPS, 24.49 MiB/s [2024-11-20T11:43:16.935Z] 6803.00 IOPS, 26.57 MiB/s [2024-11-20T11:43:17.871Z] 7245.60 IOPS, 28.30 MiB/s [2024-11-20T11:43:18.806Z] 7601.82 IOPS, 29.69 MiB/s [2024-11-20T11:43:20.181Z] 7904.25 IOPS, 30.88 MiB/s [2024-11-20T11:43:21.118Z] 8152.54 IOPS, 31.85 MiB/s [2024-11-20T11:43:22.054Z] 8374.14 IOPS, 32.71 MiB/s 00:29:16.288 Latency(us) 00:29:16.288 [2024-11-20T11:43:22.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:16.288 Verification LBA range: start 0x0 length 0x4000 00:29:16.288 Nvme1n1 : 15.01 8562.87 33.45 11535.56 0.00 6347.93 624.15 15915.89 00:29:16.288 [2024-11-20T11:43:22.054Z] =================================================================================================================== 00:29:16.288 [2024-11-20T11:43:22.054Z] Total : 8562.87 33.45 11535.56 0.00 6347.93 
624.15 15915.89 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.288 12:43:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.288 rmmod nvme_tcp 00:29:16.288 rmmod nvme_fabrics 00:29:16.288 rmmod nvme_keyring 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 337838 ']' 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 337838 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@954 -- # '[' -z 337838 ']' 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 337838 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.288 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337838 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337838' 00:29:16.548 killing process with pid 337838 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 337838 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 337838 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.548 12:43:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.084 00:29:19.084 real 0m26.120s 00:29:19.084 user 1m0.931s 00:29:19.084 sys 0m6.711s 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 ************************************ 00:29:19.084 END TEST nvmf_bdevperf 00:29:19.084 ************************************ 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 ************************************ 00:29:19.084 START TEST nvmf_target_disconnect 00:29:19.084 ************************************ 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:19.084 * Looking for test storage... 
00:29:19.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.084 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:19.085 12:43:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.085 
--rc genhtml_branch_coverage=1 00:29:19.085 --rc genhtml_function_coverage=1 00:29:19.085 --rc genhtml_legend=1 00:29:19.085 --rc geninfo_all_blocks=1 00:29:19.085 --rc geninfo_unexecuted_blocks=1 00:29:19.085 00:29:19.085 ' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.085 --rc genhtml_branch_coverage=1 00:29:19.085 --rc genhtml_function_coverage=1 00:29:19.085 --rc genhtml_legend=1 00:29:19.085 --rc geninfo_all_blocks=1 00:29:19.085 --rc geninfo_unexecuted_blocks=1 00:29:19.085 00:29:19.085 ' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.085 --rc genhtml_branch_coverage=1 00:29:19.085 --rc genhtml_function_coverage=1 00:29:19.085 --rc genhtml_legend=1 00:29:19.085 --rc geninfo_all_blocks=1 00:29:19.085 --rc geninfo_unexecuted_blocks=1 00:29:19.085 00:29:19.085 ' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.085 --rc genhtml_branch_coverage=1 00:29:19.085 --rc genhtml_function_coverage=1 00:29:19.085 --rc genhtml_legend=1 00:29:19.085 --rc geninfo_all_blocks=1 00:29:19.085 --rc geninfo_unexecuted_blocks=1 00:29:19.085 00:29:19.085 ' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.085 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.085 12:43:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.086 12:43:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.676 
12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:25.676 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:25.676 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:25.676 Found net devices under 0000:86:00.0: cvl_0_0 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.676 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:25.676 Found net devices under 0000:86:00.1: cvl_0_1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.677 12:43:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:29:25.677 00:29:25.677 --- 10.0.0.2 ping statistics --- 00:29:25.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.677 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:25.677 00:29:25.677 --- 10.0.0.1 ping statistics --- 00:29:25.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.677 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.677 12:43:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.677 ************************************ 00:29:25.677 START TEST nvmf_target_disconnect_tc1 00:29:25.677 ************************************ 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.677 [2024-11-20 12:43:30.713014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.677 [2024-11-20 12:43:30.713071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb0ab0 with 
addr=10.0.0.2, port=4420 00:29:25.677 [2024-11-20 12:43:30.713093] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:25.677 [2024-11-20 12:43:30.713103] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:25.677 [2024-11-20 12:43:30.713109] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:25.677 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:25.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:25.677 Initializing NVMe Controllers 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:25.677 00:29:25.677 real 0m0.130s 00:29:25.677 user 0m0.052s 00:29:25.677 sys 0m0.074s 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.677 ************************************ 00:29:25.677 END TEST nvmf_target_disconnect_tc1 00:29:25.677 ************************************ 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:25.677 12:43:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.677 ************************************ 00:29:25.677 START TEST nvmf_target_disconnect_tc2 00:29:25.677 ************************************ 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=342856 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 342856 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:25.677 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 342856 ']' 00:29:25.678 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.678 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.678 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.678 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.678 12:43:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.678 [2024-11-20 12:43:30.854857] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:29:25.678 [2024-11-20 12:43:30.854896] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.678 [2024-11-20 12:43:30.933490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.678 [2024-11-20 12:43:30.975975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.678 [2024-11-20 12:43:30.976011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.678 [2024-11-20 12:43:30.976018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.678 [2024-11-20 12:43:30.976026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.678 [2024-11-20 12:43:30.976034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.678 [2024-11-20 12:43:30.977563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:25.678 [2024-11-20 12:43:30.977670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:25.678 [2024-11-20 12:43:30.977776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.678 [2024-11-20 12:43:30.977777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:25.936 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.936 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:25.936 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.936 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:25.936 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.194 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.194 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.194 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.194 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 Malloc0 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 [2024-11-20 12:43:31.768210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 [2024-11-20 12:43:31.800443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=343102 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:26.195 12:43:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.101 12:43:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 342856
00:29:28.101 12:43:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:28.101 Read completed with error (sct=0, sc=8)
00:29:28.101 starting I/O failed
[... further Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pairs for the remaining outstanding I/Os on this qpair ...]
00:29:28.101 [2024-11-20 12:43:33.828932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same pattern of Read/Write completion errors ...]
00:29:28.101 [2024-11-20 12:43:33.829138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same pattern of Read/Write completion errors ...]
00:29:28.101 [2024-11-20 12:43:33.829360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same pattern of Read/Write completion errors ...]
00:29:28.102 [2024-11-20 12:43:33.829572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.102 [2024-11-20 12:43:33.829735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.102 [2024-11-20 12:43:33.829761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.102 qpair failed and we were unable to recover it.
[... this connect()/connection-error/"qpair failed" triplet repeats 4 more times for tqpair=0x7f1acc000b90 (12:43:33.830040 through 12:43:33.830539) ...]
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet repeats 25 more times (12:43:33.830636 through 12:43:33.834079) ...]
00:29:28.103 [2024-11-20 12:43:33.834233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.834743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.834921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.834932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.835347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.835894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.835929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.836104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.836136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.836260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.836295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.836570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.836605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.836735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.836768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.836890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.836925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.837170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.837211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.837330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.837363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.837481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.837513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.837697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.837720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.837836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.837858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.838025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.103 [2024-11-20 12:43:33.838620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.838945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.838967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 00:29:28.103 [2024-11-20 12:43:33.839126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.103 [2024-11-20 12:43:33.839149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.103 qpair failed and we were unable to recover it. 
00:29:28.104 [2024-11-20 12:43:33.844433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.104 [2024-11-20 12:43:33.844456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.104 qpair failed and we were unable to recover it.
00:29:28.104 [2024-11-20 12:43:33.844542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.104 [2024-11-20 12:43:33.844565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.104 qpair failed and we were unable to recover it.
00:29:28.104 [2024-11-20 12:43:33.844652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.104 [2024-11-20 12:43:33.844675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.104 qpair failed and we were unable to recover it.
00:29:28.104 [2024-11-20 12:43:33.844816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.104 [2024-11-20 12:43:33.844888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.104 qpair failed and we were unable to recover it.
00:29:28.104 [2024-11-20 12:43:33.845017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.104 [2024-11-20 12:43:33.845054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.104 qpair failed and we were unable to recover it.
00:29:28.107 [2024-11-20 12:43:33.858454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.858478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.858653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.858676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.858837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.858870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.859062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.859095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.859289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.859331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 
00:29:28.107 [2024-11-20 12:43:33.859443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.859475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.859585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.859618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.859754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.859775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.859991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.860013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 00:29:28.107 [2024-11-20 12:43:33.860235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.107 [2024-11-20 12:43:33.860259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.107 qpair failed and we were unable to recover it. 
00:29:28.395 [2024-11-20 12:43:33.860366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.860389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.860477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.860498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.860706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.860728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.860813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.860833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.860930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.860951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 
00:29:28.395 [2024-11-20 12:43:33.861040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.861153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.861337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.861509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.861793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 
00:29:28.395 [2024-11-20 12:43:33.861952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.395 [2024-11-20 12:43:33.861986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.395 qpair failed and we were unable to recover it. 00:29:28.395 [2024-11-20 12:43:33.862232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.862267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.862467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.862501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.862605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.862638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.862747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.862772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.862873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.862897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.862985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.863090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.863266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.863389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.863608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.863740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.863956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.863988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.864178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.864223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.864440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.864472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.864583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.864616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.864731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.864753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.864838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.864859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.865058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.865237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.865418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.865558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.865726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.865863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.865895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.866012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.866165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.866342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.866570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.866776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.866926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.866958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.867143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.867176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.867290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.867320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.867459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.867490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.867599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.867631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.867816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.867849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 
00:29:28.396 [2024-11-20 12:43:33.867970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.868003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.868171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.868212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.868324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.868356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.868466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.396 [2024-11-20 12:43:33.868498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.396 qpair failed and we were unable to recover it. 00:29:28.396 [2024-11-20 12:43:33.868681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.868713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 
00:29:28.397 [2024-11-20 12:43:33.868830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.868862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.868965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.868998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.869223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.869267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.869355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.869377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.869459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.869481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 
00:29:28.397 [2024-11-20 12:43:33.869698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.869722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.869816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.869836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.869994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 
00:29:28.397 [2024-11-20 12:43:33.870359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 00:29:28.397 [2024-11-20 12:43:33.870865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.397 [2024-11-20 12:43:33.870886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.397 qpair failed and we were unable to recover it. 
00:29:28.397 [2024-11-20 12:43:33.871057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.871281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.871470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.871600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.871782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.871902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.871924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.872861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.872885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.873807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.873989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.874021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.874191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.874232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.874404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.874436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.397 qpair failed and we were unable to recover it.
00:29:28.397 [2024-11-20 12:43:33.874545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.397 [2024-11-20 12:43:33.874578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.874751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.874774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.874870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.874910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.875953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.875984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.876161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.876192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.876323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.876357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.876544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.876576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.876724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.876756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.876931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.876962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.877095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.877128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.877327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.877361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.877481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.877517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.877696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.877728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.877848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.877879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.878105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.878313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.878480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.878682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.878892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.878991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.879761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.879977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.880951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.880996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.398 qpair failed and we were unable to recover it.
00:29:28.398 [2024-11-20 12:43:33.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.398 [2024-11-20 12:43:33.881133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.881241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.881273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.881400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.881432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.881546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.881579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.881681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.881703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.881851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.881877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.882870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.882895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.883847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.883878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.884098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.884172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.884322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.884359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.884489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.884524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.884703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.884736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.884867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.884900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.885912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.885934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.886927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.886950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.887059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.887081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.399 [2024-11-20 12:43:33.887166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.399 [2024-11-20 12:43:33.887188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.399 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.887898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.887991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.888878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.888991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.889025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.889236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.889262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.889421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.889461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.889634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.400 [2024-11-20 12:43:33.889667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.400 qpair failed and we were unable to recover it.
00:29:28.400 [2024-11-20 12:43:33.889854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.889887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.890014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.890236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.890383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.890516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 
00:29:28.400 [2024-11-20 12:43:33.890668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.890874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.890897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 
00:29:28.400 [2024-11-20 12:43:33.891421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.891862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.891895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 00:29:28.400 [2024-11-20 12:43:33.892004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.400 [2024-11-20 12:43:33.892035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.400 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.892157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.892188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.892306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.892339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.892575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.892607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.892737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.892784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.892874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.892897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.893054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.893228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.893442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.893645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.893797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.893951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.893993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.894119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.894152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.894272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.894306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.894497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.894529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.894644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.894676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.894800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.894831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.895036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.895274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.895415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.895586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.895690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.895876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.895899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.896360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.896842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.896865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.897138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 
00:29:28.401 [2024-11-20 12:43:33.897780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.897892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.897915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.401 [2024-11-20 12:43:33.898021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.401 [2024-11-20 12:43:33.898043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.401 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.898132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.898154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.898324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.898350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 
00:29:28.402 [2024-11-20 12:43:33.898440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.898463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.898690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.898723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.898834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.898866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.898971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.899249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 
00:29:28.402 [2024-11-20 12:43:33.899391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.899595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.899784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.899898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.899920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.900097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.900121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 
00:29:28.402 [2024-11-20 12:43:33.900274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.900297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.900472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.900504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.900678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.900710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.900955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.900987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.901099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 
00:29:28.402 [2024-11-20 12:43:33.901240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.901379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.901517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.901717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 00:29:28.402 [2024-11-20 12:43:33.901913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.402 [2024-11-20 12:43:33.901944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.402 qpair failed and we were unable to recover it. 
00:29:28.402 [2024-11-20 12:43:33.902060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.402 [2024-11-20 12:43:33.902092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.402 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7b9ba0 (addr=10.0.0.2, port=4420) repeats for every subsequent reconnect attempt, timestamps 12:43:33.902195 through 12:43:33.919082 ...]
00:29:28.405 [2024-11-20 12:43:33.919183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.919321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.919429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.919557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.919733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 
00:29:28.405 [2024-11-20 12:43:33.919973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.919996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.920086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.920108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.920187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.920214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.920500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.920573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.920721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.920757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 
00:29:28.405 [2024-11-20 12:43:33.920923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.405 [2024-11-20 12:43:33.920946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.405 qpair failed and we were unable to recover it. 00:29:28.405 [2024-11-20 12:43:33.921111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.921228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.921333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.921441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.921550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.921736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.921855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.921877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.922028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.922208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.922319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.922491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.922728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.922884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.923239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.923357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.923545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.923728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.923861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.923882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.924035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.924141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.924256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.924430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.924606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.924727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.924835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.924857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.925517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.925942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.925965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.926058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 
00:29:28.406 [2024-11-20 12:43:33.926234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.926423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.926609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.926794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.406 qpair failed and we were unable to recover it. 00:29:28.406 [2024-11-20 12:43:33.926910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.406 [2024-11-20 12:43:33.926933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.927035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.927228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.927406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.927599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.927705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.927891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.927913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.928019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.928143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.928257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.928450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.928646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.928772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.928794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.929026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.929224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.929335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.929521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.929779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.929966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.929988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.930097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.930119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.930215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.930238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.930456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.930478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.930639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.930661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.930812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.930834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.930958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.931003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.931216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.931252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.931428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.931462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.931657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.931689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.931942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.931976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.932170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.932219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.932386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.932411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.932518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.932541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.932700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.932721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.932941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.932964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.933126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.933148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 00:29:28.407 [2024-11-20 12:43:33.933312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.407 [2024-11-20 12:43:33.933348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.407 qpair failed and we were unable to recover it. 
00:29:28.407 [2024-11-20 12:43:33.933457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.407 [2024-11-20 12:43:33.933479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.407 qpair failed and we were unable to recover it.
00:29:28.407 [2024-11-20 12:43:33.933639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.407 [2024-11-20 12:43:33.933660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.407 qpair failed and we were unable to recover it.
00:29:28.407 [2024-11-20 12:43:33.933812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.407 [2024-11-20 12:43:33.933835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.407 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.934888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.934996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.935863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.935887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.936918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.937146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.937168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.937338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.937364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.937516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.937539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.937632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.937656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.937874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.937897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.938757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.938781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.939034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.939056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.939167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.939189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.939407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.939430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.939609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.939631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.939749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.939771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.940004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.940026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.940115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.408 [2024-11-20 12:43:33.940138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.408 qpair failed and we were unable to recover it.
00:29:28.408 [2024-11-20 12:43:33.940299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.940322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.940428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.940454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.940605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.940844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.940867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.941874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.941897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.942833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.942855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.943871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.943986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.944940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.944966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.945138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.945162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.945275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.945296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.945414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.945435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.945652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.945674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.409 [2024-11-20 12:43:33.945765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.409 [2024-11-20 12:43:33.945787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.409 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.945934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.945956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.946056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.946079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.946239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.946263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.946387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.946409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.946674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.946697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.946917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.946939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.947885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.947909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.948078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.948102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.948291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.948314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.948511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.948535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.948633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.948657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.948899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.948924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.949024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.949048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.949271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.410 [2024-11-20 12:43:33.949297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.410 qpair failed and we were unable to recover it.
00:29:28.410 [2024-11-20 12:43:33.949461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.949484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.949674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.949697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.949864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.949887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.950046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.950305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 
00:29:28.410 [2024-11-20 12:43:33.950423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.950544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.950732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.950905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.950927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.951178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.951224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 
00:29:28.410 [2024-11-20 12:43:33.951476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.951500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.951579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.951602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.951710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.951732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.951895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.951917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.952076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.952098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 
00:29:28.410 [2024-11-20 12:43:33.952263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.952286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.952375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.952396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.952579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.952600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.952749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.410 [2024-11-20 12:43:33.952771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.410 qpair failed and we were unable to recover it. 00:29:28.410 [2024-11-20 12:43:33.952890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.952912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.952991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.953188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.953366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.953559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.953674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.953854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.953970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.953990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.954088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.954292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.954478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.954671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.954782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.954967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.954990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.955089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.955111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.955280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.955303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.955452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.955474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.955637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.955660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.955927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.955949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.956320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.956965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.956986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.957098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.957290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.957500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.957609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.957725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.957841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.957969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.957992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.958077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.958100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.958251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.958274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.958425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.958447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 
00:29:28.411 [2024-11-20 12:43:33.958555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.958577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.958672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.411 [2024-11-20 12:43:33.958698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.411 qpair failed and we were unable to recover it. 00:29:28.411 [2024-11-20 12:43:33.958850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.958873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.958986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.959168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.959314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.959485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.959594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.959766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.959954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.959976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.960074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.960261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.960385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.960559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.960681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.960905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.960928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.961146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.961335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.961577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.961791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.961904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.961926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.962016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.962191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.962393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.962533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.962708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.962878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.962906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.963392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.963794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.963978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.964081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.964275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.964447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.964636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 00:29:28.412 [2024-11-20 12:43:33.964742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.412 [2024-11-20 12:43:33.964765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.412 qpair failed and we were unable to recover it. 
00:29:28.412 [2024-11-20 12:43:33.964882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.412 [2024-11-20 12:43:33.964904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.965035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.965215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.965437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.965680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.965804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.965980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.966917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.966939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.967942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.967965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.968896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.968985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.969965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.969988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.970141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.970164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.970279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.970302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.970384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.970406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.970507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.970530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.413 qpair failed and we were unable to recover it.
00:29:28.413 [2024-11-20 12:43:33.970708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.413 [2024-11-20 12:43:33.970730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.970822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.970845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.971918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.971941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.972952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.972975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.973900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.973996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.974255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.974377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.974573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.974694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.974883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.974905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.975165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.975435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.975607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.975733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.414 [2024-11-20 12:43:33.975857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.414 qpair failed and we were unable to recover it.
00:29:28.414 [2024-11-20 12:43:33.975974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.975997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.976962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.976985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.977220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.977244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.977410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.977432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.977590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.977612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.977803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.977825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.977919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.977942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.978932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.978954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.979036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.979058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.979302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.979324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.979477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.979499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.979597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.415 [2024-11-20 12:43:33.979620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.415 qpair failed and we were unable to recover it.
00:29:28.415 [2024-11-20 12:43:33.979703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.979724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.979904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.979926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.980029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.980212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.980397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 
00:29:28.415 [2024-11-20 12:43:33.980579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.980700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.980945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.980967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.981140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.981280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 
00:29:28.415 [2024-11-20 12:43:33.981553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.981742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.981848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.981963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.981985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 00:29:28.415 [2024-11-20 12:43:33.982132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.415 [2024-11-20 12:43:33.982155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.415 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.982249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.982376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.982583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.982688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.982793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.982969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.982991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.983183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.983230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.983480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.983503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.983605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.983627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.983735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.983758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.983860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.983882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.984149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.984296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.984484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.984597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.984727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.984908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.984930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.985107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.985229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.985343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.985512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.985694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.985816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.985838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.986332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.986943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.987160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.987182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.987356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.987379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.987466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.987488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.987682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.987704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.987870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.987893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 
00:29:28.416 [2024-11-20 12:43:33.988089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.988112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.988193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.988223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.416 [2024-11-20 12:43:33.988442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.416 [2024-11-20 12:43:33.988464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.416 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.988556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.988579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.988678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.988700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.988854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.988878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.988980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.989170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.989293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.989408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.989601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.989709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.989902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.989925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.990214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.990237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.990440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.990462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.990612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.990634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.990742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.990764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.990917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.990938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.991189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.991237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.991394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.991418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.991507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.991534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.991707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.991729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.991951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.991974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.992070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.992091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.992276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.992299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.992519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.992551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.992674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.992705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.992900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.992933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.993053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.993227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.993407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.993540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.993707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.993915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.993937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.994166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.994254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.994406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.994444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.994676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.994710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.994915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.994949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.995075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.995109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.995286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.995321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 
00:29:28.417 [2024-11-20 12:43:33.995504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.417 [2024-11-20 12:43:33.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.417 qpair failed and we were unable to recover it. 00:29:28.417 [2024-11-20 12:43:33.995653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.995685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.995924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.995958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.996096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.996130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.996312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.996347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:33.996518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.996551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.996674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.996707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.996878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.996920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.997112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.997146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.997280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.997315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:33.997501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.997534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.997721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.997753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.997862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.997895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.998066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.998100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.998301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.998336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:33.998527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.998698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.998731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.998918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.998951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.999126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.999159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.999433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.999467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:33.999587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.999621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.999757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.999791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:33.999914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:33.999948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.000280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.000316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.000508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.000542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:34.000737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.000771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.001037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.001071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.001197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.001242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.001374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.001408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.001589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.001622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.418 [2024-11-20 12:43:34.001808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.001841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.002049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.002082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.002279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.002315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.002434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.002467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 00:29:28.418 [2024-11-20 12:43:34.002635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.418 [2024-11-20 12:43:34.002707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.418 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.002852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.002889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.003004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.003177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.003351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.003590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.003808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.003963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.003996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.004181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.004229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.004371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.004404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.004581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.004614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.004860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.004893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.005064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.005098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.005283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.005327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.005503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.005532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.005654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.005688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.005806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.005838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.006014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.006173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.006337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.006478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.006696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.006841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.006873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.007099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.007230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.007421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.007548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.007683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.007899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.007933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.008119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.008153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.008296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.008332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.008517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.008551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.008752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.008786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.008907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.008941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.009137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.009172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.009344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.009384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 
00:29:28.419 [2024-11-20 12:43:34.009513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.009546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.009680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.009713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.419 qpair failed and we were unable to recover it. 00:29:28.419 [2024-11-20 12:43:34.009841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.419 [2024-11-20 12:43:34.009874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.010141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.010173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.010313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.010356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 
00:29:28.420 [2024-11-20 12:43:34.010481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.010514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.010637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.010670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.010800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.010832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 
00:29:28.420 [2024-11-20 12:43:34.011365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.011933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.011966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 
00:29:28.420 [2024-11-20 12:43:34.012146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.012177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.012379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.012412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.012519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.012551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.012737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.012769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 00:29:28.420 [2024-11-20 12:43:34.012893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.420 [2024-11-20 12:43:34.012925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.420 qpair failed and we were unable to recover it. 
00:29:28.420 [2024-11-20 12:43:34.013037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.420 [2024-11-20 12:43:34.013059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.420 qpair failed and we were unable to recover it.
00:29:28.420 [... the same three-line group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7b9ba0 and tqpair=0x7f1ad8000b90 from 12:43:34.013 through 12:43:34.031 ...]
00:29:28.423 [2024-11-20 12:43:34.031179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.031380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.031500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.031687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.031809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 
00:29:28.423 [2024-11-20 12:43:34.031940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.031964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.032322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.032495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.032686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 
00:29:28.423 [2024-11-20 12:43:34.032792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.032916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.032939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.033056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.033078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.033230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.033254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.033341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.033361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 
00:29:28.423 [2024-11-20 12:43:34.033443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.423 [2024-11-20 12:43:34.033466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.423 qpair failed and we were unable to recover it. 00:29:28.423 [2024-11-20 12:43:34.033625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.033648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.033822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.033845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.033941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.033964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.034146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.034329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.034439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.034559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.034765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.034888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.034911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.035002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.035131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.035327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.035500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.035605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.035857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.035881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.036652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.036965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.036988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.037109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.037132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.037286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.037320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.037483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.037508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.037658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.037681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.037897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.037921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.038076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.038269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.038382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.038521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.038774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.038907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.038930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.039098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 
00:29:28.424 [2024-11-20 12:43:34.039212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.039416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.039609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.039725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.424 qpair failed and we were unable to recover it. 00:29:28.424 [2024-11-20 12:43:34.039906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.424 [2024-11-20 12:43:34.039929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
00:29:28.425 [2024-11-20 12:43:34.040036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.040059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.040221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.040245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.040467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.040491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.040642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.040665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.040827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.040850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
00:29:28.425 [2024-11-20 12:43:34.041015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.041125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.041335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.041441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.041569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
00:29:28.425 [2024-11-20 12:43:34.041691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.041867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.041889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
00:29:28.425 [2024-11-20 12:43:34.042426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.042845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.042869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 00:29:28.425 [2024-11-20 12:43:34.043024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.043048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
00:29:28.425 [2024-11-20 12:43:34.043213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.425 [2024-11-20 12:43:34.043238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.425 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously from 12:43:34.043 through 12:43:34.060 against addr=10.0.0.2, port=4420, mostly for tqpair=0x7b9ba0 with a run for tqpair=0x7f1ad8000b90 around 12:43:34.051-12:43:34.053; repeated occurrences elided ...]
00:29:28.428 [2024-11-20 12:43:34.060363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.060386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.060537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.060559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.060726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.060758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.060917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.060940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.061050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 
00:29:28.428 [2024-11-20 12:43:34.061168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.061298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.061416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.061530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.061725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 
00:29:28.428 [2024-11-20 12:43:34.061834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.061856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.062013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.062035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.062193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.062221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.062317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.428 [2024-11-20 12:43:34.062340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.428 qpair failed and we were unable to recover it. 00:29:28.428 [2024-11-20 12:43:34.062507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.062529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.062691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.062713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.062824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.062846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.062950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.062973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.063120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.063143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.063336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c7af0 is same with the state(6) to be set 00:29:28.429 [2024-11-20 12:43:34.063603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.063676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.063833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.063870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.064003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.064224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.064388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.064539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.064751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.064867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.064889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.065044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.065242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.065347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.065457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.065632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.065815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.065887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.066439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.066963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.066984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.067140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.067749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.067908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.067990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.068013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.068094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.068116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.068270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.068294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 
00:29:28.429 [2024-11-20 12:43:34.068375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.068399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.068548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.429 [2024-11-20 12:43:34.068570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.429 qpair failed and we were unable to recover it. 00:29:28.429 [2024-11-20 12:43:34.068666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.068688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.068784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.068807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.068959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.068982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-11-20 12:43:34.069156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.069270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.069390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.069512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.069630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-11-20 12:43:34.069806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.069928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.069952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-11-20 12:43:34.070567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.070976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.070999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.071084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-11-20 12:43:34.071192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.071320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.071510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.071624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 00:29:28.430 [2024-11-20 12:43:34.071734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.430 [2024-11-20 12:43:34.071756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.430 qpair failed and we were unable to recover it. 
00:29:28.430 [2024-11-20 12:43:34.071838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.071861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.071950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.071972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.072912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.072934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.073916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.073938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.430 [2024-11-20 12:43:34.074031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.430 [2024-11-20 12:43:34.074054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.430 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.074861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.074884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.075879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.075986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.076913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.076936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.077911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.077934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.078930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.078953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.431 [2024-11-20 12:43:34.079785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.431 qpair failed and we were unable to recover it.
00:29:28.431 [2024-11-20 12:43:34.079954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.079978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.080903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.080925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.081899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.081983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.082912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.082935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.083879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.083983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.432 [2024-11-20 12:43:34.084862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.432 qpair failed and we were unable to recover it.
00:29:28.432 [2024-11-20 12:43:34.084945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.084968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.085963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.085985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.086200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.086228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.086330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.086352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.086460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.086483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.086757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.086780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.086933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.086955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.087967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.087989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.088259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.088446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.088565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.088683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.088814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.088993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.433 [2024-11-20 12:43:34.089016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.433 qpair failed and we were unable to recover it.
00:29:28.433 [2024-11-20 12:43:34.089123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.089146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.089305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.089330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.089478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.089501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.089656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.089679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.089838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.089861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 
00:29:28.433 [2024-11-20 12:43:34.090021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.090146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.090283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.090394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.090513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 
00:29:28.433 [2024-11-20 12:43:34.090748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.433 [2024-11-20 12:43:34.090876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.433 [2024-11-20 12:43:34.090898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.433 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.091490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.091925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.091948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.092226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.092803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.092911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.092934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.093452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.093965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.093988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.094140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.094162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.094330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.094353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.094531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.094554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.094655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.094678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.094834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.094857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.095007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.095137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.095264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.095552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.095673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.095806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.095943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.095966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.096133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.096157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.096361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.096385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.096492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.096515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 00:29:28.434 [2024-11-20 12:43:34.096607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.434 [2024-11-20 12:43:34.096631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.434 qpair failed and we were unable to recover it. 
00:29:28.434 [2024-11-20 12:43:34.096780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.096803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.096969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.096992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
00:29:28.435 [2024-11-20 12:43:34.097616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.097904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.097985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.098159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
00:29:28.435 [2024-11-20 12:43:34.098301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.098426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.098531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.098661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.098789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
00:29:28.435 [2024-11-20 12:43:34.098919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.098942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.099022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.099213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.099414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.099589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
00:29:28.435 [2024-11-20 12:43:34.099784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.099912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.100090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.100113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.100224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.100252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 00:29:28.435 [2024-11-20 12:43:34.100349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.100371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
00:29:28.435 [2024-11-20 12:43:34.100522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.435 [2024-11-20 12:43:34.100545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.435 qpair failed and we were unable to recover it. 
[... the same error triplet repeats ~115 times between 12:43:34.100522 and 12:43:34.117411 (log timestamps 00:29:28.435-00:29:28.438): posix.c:1054:posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7b9ba0 (and, for a handful of attempts around 12:43:34.105889-12:43:34.106805, tqpair=0x7f1ad8000b90) with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:28.438 [2024-11-20 12:43:34.117513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.117537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.117706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.117816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.117839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.117923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.117945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.118028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 
00:29:28.438 [2024-11-20 12:43:34.118226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.118340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.118526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.118648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.118763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 
00:29:28.438 [2024-11-20 12:43:34.118873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.118896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.119068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.119091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.119194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.119225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.119326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.119348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 00:29:28.438 [2024-11-20 12:43:34.119502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.438 [2024-11-20 12:43:34.119524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.438 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.119616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.119638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.119736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.119760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.119861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.119883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.119981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.120224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.120347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.120454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.120715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.120852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.120957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.120980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.121074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.121096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.121250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.121274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.121424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.121446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.121607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.121629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.121777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.121800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.121977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.122161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.122353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.122535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.122711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.122912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.122935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.123590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.123886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.123909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.124412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.124942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.124965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.125148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.125171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.125416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.125440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.125592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.125615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.125733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.125757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.125919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.125941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 
00:29:28.439 [2024-11-20 12:43:34.126101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.439 [2024-11-20 12:43:34.126124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.439 qpair failed and we were unable to recover it. 00:29:28.439 [2024-11-20 12:43:34.126233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.126257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.126425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.126448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.126665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.126689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.126799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.126822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-11-20 12:43:34.126956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.126995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.127139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.127172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.127307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.127353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.127471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.127496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.127694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.127717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-11-20 12:43:34.127817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.127839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-11-20 12:43:34.128746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.128963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.128985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.129074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.129097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 00:29:28.440 [2024-11-20 12:43:34.129232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.440 [2024-11-20 12:43:34.129257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.440 qpair failed and we were unable to recover it. 
00:29:28.440 [2024-11-20 12:43:34.129371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.440 [2024-11-20 12:43:34.129395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.440 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every subsequent connection attempt to addr=10.0.0.2, port=4420 from 12:43:34.129484 through 12:43:34.147753, mostly with tqpair=0x7b9ba0 and occasionally with tqpair=0x7f1acc000b90 or tqpair=0x7f1ad8000b90 ...]
00:29:28.757 [2024-11-20 12:43:34.147921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.147943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.757 [2024-11-20 12:43:34.148501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.148976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.148999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.757 [2024-11-20 12:43:34.149153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.149401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.149524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.149630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.149800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.757 [2024-11-20 12:43:34.149918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.149941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.757 [2024-11-20 12:43:34.150568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.150930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.150953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.151042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.757 [2024-11-20 12:43:34.151153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.151280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.151393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.151501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 00:29:28.757 [2024-11-20 12:43:34.151628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-11-20 12:43:34.151651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.757 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.151747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.151769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.151922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.151944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.152031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.152053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.152188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.152244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.152368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.152403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.152582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.152616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.152864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.152898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.153582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.153915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.153997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.154110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.154289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.154502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.154626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.154743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.154765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.154985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.155160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.155299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.155411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.155519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.155642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.155817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.155943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.155965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.156056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.156079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.156170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.156193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 00:29:28.758 [2024-11-20 12:43:34.156285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-11-20 12:43:34.156308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.758 qpair failed and we were unable to recover it. 
00:29:28.758 [2024-11-20 12:43:34.156396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.156422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.156520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.156543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.156653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.156675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.156849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.156954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.156976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 
00:29:28.759 [2024-11-20 12:43:34.157080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.157265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.157393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.157566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.157760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 
00:29:28.759 [2024-11-20 12:43:34.157874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.157898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.158007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.158030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.158122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.158145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.158320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.158345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 00:29:28.759 [2024-11-20 12:43:34.158546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-11-20 12:43:34.158569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.759 qpair failed and we were unable to recover it. 
00:29:28.759 [2024-11-20 12:43:34.158721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.158745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.158915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.158938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.159179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.159324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.159445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.159711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.159904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.159996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.160943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.160966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.161851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.161874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.162031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.162054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.759 [2024-11-20 12:43:34.162239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.759 [2024-11-20 12:43:34.162263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.759 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.162368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.162391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.162671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.162694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.162859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.162882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.163942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.163965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.164907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.164930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.165892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.165915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.166907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.166987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.760 [2024-11-20 12:43:34.167758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.760 qpair failed and we were unable to recover it.
00:29:28.760 [2024-11-20 12:43:34.167976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.167999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.168916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.168939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.169827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.169978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.170861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.170885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.171931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.171953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.172937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.172960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.173116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.761 [2024-11-20 12:43:34.173138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.761 qpair failed and we were unable to recover it.
00:29:28.761 [2024-11-20 12:43:34.173249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.173272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.173417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.173490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.173707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.173746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.173874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.173909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.174083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.174245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.174282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.174490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.174524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.174709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.174735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.174905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.174927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.175959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.175983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.176224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.176249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.176362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.176385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.176491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.176513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.176677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.176699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.176849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.176872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.177022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.762 [2024-11-20 12:43:34.177045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.762 qpair failed and we were unable to recover it.
00:29:28.762 [2024-11-20 12:43:34.177140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.177339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.177459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.177578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.177696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 
00:29:28.762 [2024-11-20 12:43:34.177895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.177919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 
00:29:28.762 [2024-11-20 12:43:34.178670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.178893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.178916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.179001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.179023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 00:29:28.762 [2024-11-20 12:43:34.179106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.762 [2024-11-20 12:43:34.179128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.762 qpair failed and we were unable to recover it. 
00:29:28.762 [2024-11-20 12:43:34.179283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.179307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.179528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.179551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.179650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.179672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.179833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.179856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.180028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.180227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.180351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.180455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.180823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.180948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.180970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.181610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.181953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.181975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.182138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.182165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.182357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.182382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.182562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.182585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.182746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.182769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.182933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.182957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.183121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.183251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.183371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.183497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.183673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.183855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.183878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.184034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.184057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 
00:29:28.763 [2024-11-20 12:43:34.184236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.184261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.184354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.184377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.184466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.763 [2024-11-20 12:43:34.184489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.763 qpair failed and we were unable to recover it. 00:29:28.763 [2024-11-20 12:43:34.184719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.184741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.184827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.184850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.184945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.184968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.185150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.185172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.185349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.185373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.185536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.185558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.185654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.185676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.185836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.185859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.186075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.186248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.186358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.186464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.186660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.186867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.186890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.187051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.187175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.187357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.187552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.187756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.187894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.187917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.188013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.188278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.188385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.188574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.188764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.188951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.188975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 00:29:28.764 [2024-11-20 12:43:34.189156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.189183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it. 
00:29:28.764 [2024-11-20 12:43:34.189411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.764 [2024-11-20 12:43:34.189435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.764 qpair failed and we were unable to recover it.
[last message repeated for each subsequent connect() retry on tqpair=0x7b9ba0, timestamps 12:43:34.189522 through 12:43:34.208084]
00:29:28.767 [2024-11-20 12:43:34.208340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.208413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it.
[last message repeated for each subsequent connect() retry on tqpair=0x7f1acc000b90, timestamps 12:43:34.208725 through 12:43:34.209644]
00:29:28.767 [2024-11-20 12:43:34.209878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.209904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it.
00:29:28.767 [2024-11-20 12:43:34.210077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.210100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.210249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.210273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.210423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.210446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.210609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.210632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.210801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.210823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 
00:29:28.767 [2024-11-20 12:43:34.211002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.211026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.211141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.211164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.211293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.211317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.211421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.767 [2024-11-20 12:43:34.211443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.767 qpair failed and we were unable to recover it. 00:29:28.767 [2024-11-20 12:43:34.211601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.211624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.211724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.211747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.211846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.211869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.212034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.212057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.212237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.212261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.212444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.212467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.212626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.212649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.212894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.212917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.213082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.213105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.213273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.213296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.213402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.213426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.213659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.213685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.213853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.213876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.214129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.214153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.214325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.214350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.214444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.214468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.214636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.214660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.214827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.214850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.215016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.215138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.215317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.215527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.215791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.215969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.215992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.216098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.216121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.216371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.216396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.216588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.216611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.216779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.216801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.216956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.216979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.217086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.217110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 00:29:28.768 [2024-11-20 12:43:34.217262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.768 [2024-11-20 12:43:34.217286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.768 qpair failed and we were unable to recover it. 
00:29:28.768 [2024-11-20 12:43:34.217448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.217472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.217630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.217653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.217818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.217841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.217990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.218176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.218294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.218532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.218641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.218907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.218930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.219040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.219063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.219246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.219271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.219511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.219534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.219729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.219751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.219854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.219877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.219978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.220156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.220284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.220483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.220655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.220841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.220864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.221035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.221161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.221290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.221553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.221751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.221866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.221890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.222001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.222179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.222392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.222609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.222786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.222970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.222994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.223107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.223130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.223289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.223313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 00:29:28.769 [2024-11-20 12:43:34.223399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.769 [2024-11-20 12:43:34.223422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.769 qpair failed and we were unable to recover it. 
00:29:28.769 [2024-11-20 12:43:34.223585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.769 [2024-11-20 12:43:34.223608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.769 qpair failed and we were unable to recover it.
[The same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats over a hundred more times between 12:43:34.223790 and 12:43:34.244873; the duplicate entries are elided here.]
00:29:28.772 [2024-11-20 12:43:34.245039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.772 [2024-11-20 12:43:34.245062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.772 qpair failed and we were unable to recover it. 00:29:28.772 [2024-11-20 12:43:34.245149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.772 [2024-11-20 12:43:34.245171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.772 qpair failed and we were unable to recover it. 00:29:28.772 [2024-11-20 12:43:34.245331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.772 [2024-11-20 12:43:34.245355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.772 qpair failed and we were unable to recover it. 00:29:28.772 [2024-11-20 12:43:34.245455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.245477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.245714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.245736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.245846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.245869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.246644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.246942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.246965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.247049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.247071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.247232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.247257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.247477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.247501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.247731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.247755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.247923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.247946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.248135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.248158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.248393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.248418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.248661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.248684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.248866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.248892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.249131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.249155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.249420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.249443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.249610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.249633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.249783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.249806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.249977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.250001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.250153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.250176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.250371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.250395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.250636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.250660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.250831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.250854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.251036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.251059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.251299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.251323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.251541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.251565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.251816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.251843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 
00:29:28.773 [2024-11-20 12:43:34.252011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.252034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.773 [2024-11-20 12:43:34.252216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.773 [2024-11-20 12:43:34.252241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.773 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.252484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.252507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.252770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.252794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.253008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.253032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.253137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.253160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.253415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.253439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.253705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.253931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.253954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.254106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.254130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.254281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.254305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.254408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.254431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.254564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.254810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.254833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.255072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.255095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.255326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.255349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.255515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.255538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.255763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.255787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.256035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.256058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.256303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.256328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.256549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.256573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.256788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.256812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.257080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.257103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.257352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.257376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.257608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.257632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.257803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.257826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.258058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.258081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.258194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.258241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.258519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.258542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.258732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.258755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.258902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.258925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.259142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.259166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.259458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.259483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.259706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.259729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.259944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.259967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.260184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.260214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.260379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.260403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.260644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.260668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.260913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.260936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 00:29:28.774 [2024-11-20 12:43:34.261038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.774 [2024-11-20 12:43:34.261059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.774 qpair failed and we were unable to recover it. 
00:29:28.774 [2024-11-20 12:43:34.261303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.775 [2024-11-20 12:43:34.261327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.775 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats continuously with successive timestamps from 2024-11-20 12:43:34.261590 through 12:43:34.287737 (log clock 00:29:28.775-00:29:28.778), always with errno = 111 and tqpair=0x7b9ba0, addr=10.0.0.2, port=4420.]
00:29:28.778 [2024-11-20 12:43:34.287842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.287863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.288112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.288134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.288362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.288386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.288550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.288573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.288794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.288817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.288983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.289177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.289434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.289637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.289768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.289976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.289999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.290238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.290262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.290509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.290532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.290717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.290741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.290968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.290992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.291148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.291171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.291331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.291355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.291614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.291637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.291884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.291907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.292112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.292136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.292355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.292379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.292599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.292631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.292797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.292820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.292980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.293002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.293167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.293190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.293445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.293469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.293563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.293584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.293823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.293847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.294070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.294100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.294272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.294297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 
00:29:28.778 [2024-11-20 12:43:34.294486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.294510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.294756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.294779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.295054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.295078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.295263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.295288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.778 qpair failed and we were unable to recover it. 00:29:28.778 [2024-11-20 12:43:34.295506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.778 [2024-11-20 12:43:34.295528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.295702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.295725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.295895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.295918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.296128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.296151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.296341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.296366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.296527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.296550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.296702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.296726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.296913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.296936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.297134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.297351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.297490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.297665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.297773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.297974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.297997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.298229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.298257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.298501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.298525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.298754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.298777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.299023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.299049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.299211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.299235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.299520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.299543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.299705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.299729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.299973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.300005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.300252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.300276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.300517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.300540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.300706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.300730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.300989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.301189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.301391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.301579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.301708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.301820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.301842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.302064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.302087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.302176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.302197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.302443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.302467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.302638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.302661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.302895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.302919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 
00:29:28.779 [2024-11-20 12:43:34.303148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.303171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.303406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.303431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.779 [2024-11-20 12:43:34.303652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.779 [2024-11-20 12:43:34.303675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.779 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.303832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.303856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.304023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.304046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.304233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.304438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.304463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.304633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.304656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.304932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.304955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.305224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.305247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.305484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.305507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.305756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.305780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.306011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.306035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.306191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.306220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.306441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.306465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.306716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.306739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.306896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.306920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.307015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.307262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.307288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.307531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.307807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.307830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.308017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.308162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.308356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.308560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.308754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.308947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.308970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.309217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.309241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.309489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.309512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.309782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.309805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.309995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.310019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.310129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.310151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.310399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.310423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.310617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.310641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.310862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.310885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.311109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.311132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.311284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.311308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.311557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.311581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.311845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.311869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.312114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.312137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 
00:29:28.780 [2024-11-20 12:43:34.312365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.780 [2024-11-20 12:43:34.312390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.780 qpair failed and we were unable to recover it. 00:29:28.780 [2024-11-20 12:43:34.312565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.312588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.312781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.312804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.313047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.313070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.313257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.313281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.313478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.313502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.313723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.313750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.313997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.314020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.314196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.314227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.314471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.314495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.314693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.314716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.314980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.315004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.315226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.315251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.315498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.315522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.315694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.315717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.315965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.315988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.316184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.316216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.316440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.316465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.316574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.316597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.316784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.316808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.317057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.317081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.317334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.317359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.317579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.317603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.317712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.317736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.317891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.317915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.318153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.318176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.318495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.318519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.318699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.318722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.318895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.318919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.319143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.319166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.319350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.319375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.319561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.319585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.319759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.319783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.319976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.319999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.320189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.320223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.781 [2024-11-20 12:43:34.320487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.320510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.320708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.320731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.320951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.320975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.321223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.321247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 00:29:28.781 [2024-11-20 12:43:34.321416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.781 [2024-11-20 12:43:34.321439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.781 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.321688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.321712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.321832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.321855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.322019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.322042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.322306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.322331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.322501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.322525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.322770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.322794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.322967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.322991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.323164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.323190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.323487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.323511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.323682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.323706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.323878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.323901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.324121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.324145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.324415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.324440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.324683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.324707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.324877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.324901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.325154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.325178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.325287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.325308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.325582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.325606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.325762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.325785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.326024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.326048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.326237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.326261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.326447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.326471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.326744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.326769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.326927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.326950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.327120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.327145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.327321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.327345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.327616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.327640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.327812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.327836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.328001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.328023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 00:29:28.782 [2024-11-20 12:43:34.328145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.782 [2024-11-20 12:43:34.328168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.782 qpair failed and we were unable to recover it. 
00:29:28.782 [2024-11-20 12:43:34.328439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.782 [2024-11-20 12:43:34.328463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.782 qpair failed and we were unable to recover it.
00:29:28.782 [2024-11-20 12:43:34.328705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.782 [2024-11-20 12:43:34.328728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.782 qpair failed and we were unable to recover it.
00:29:28.782 [2024-11-20 12:43:34.328913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.782 [2024-11-20 12:43:34.328937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.782 qpair failed and we were unable to recover it.
00:29:28.782 [2024-11-20 12:43:34.329112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.782 [2024-11-20 12:43:34.329135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.782 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.329334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.329358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.329521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.329545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.329818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.329842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.330113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.330137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.330293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.330318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.330542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.330566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.330813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.330837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.331090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.331113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.331290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.331316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.331470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.331494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.331661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.331685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.331785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.331808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.332008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.332033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.332215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.332240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.332440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.332465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.332664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.332688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.332876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.332900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.333091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.333114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.333391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.333416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.333588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.333854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.333878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.334047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.334071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.334335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.334360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.334601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.334625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.334789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.334812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.334909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.334931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.335151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.335175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.335424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.335448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.335701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.335724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.335984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.336008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.336191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.336223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.336474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.336497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.336664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.336687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.336892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.336916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.337188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.337217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.337461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.337484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.337642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.337665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.337899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.783 [2024-11-20 12:43:34.337922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.783 qpair failed and we were unable to recover it.
00:29:28.783 [2024-11-20 12:43:34.338091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.338338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.338364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.338538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.338561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.338676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.338706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.338956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.338980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.339179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.339211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.339411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.339435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.339605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.339628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.339821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.339844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.339942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.339965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.340226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.340250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.340528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.340551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.340713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.340737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.340908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.340931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.341155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.341178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.341445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.341470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.341701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.341725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.341956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.341980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.342144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.342167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.342359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.342383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.342583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.342607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.342808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.342831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.343117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.343140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.343388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.343413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.343623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.343647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.343823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.343846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.344093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.344117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.344367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.344392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.344663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.344688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.344858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.344882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.345045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.345068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.345276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.345302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.345505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.345530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.345692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.345715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.345961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.345985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.346170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.346194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.346314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.346540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.346563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.784 qpair failed and we were unable to recover it.
00:29:28.784 [2024-11-20 12:43:34.346797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.784 [2024-11-20 12:43:34.346821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.347039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.347063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.347240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.347265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.347527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.347552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.347802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.347826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.347983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.348007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.348270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.348296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.348476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.348499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.348689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.348713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.348880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.348903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.349138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.349164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.349434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.349458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.349713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.349737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.349914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.349938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.350097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.785 [2024-11-20 12:43:34.350120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.785 qpair failed and we were unable to recover it.
00:29:28.785 [2024-11-20 12:43:34.350297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.350321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.350493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.350517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.350765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.350788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.351012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.351036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.351283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.351308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 
00:29:28.785 [2024-11-20 12:43:34.351495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.351519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.351694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.351718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.351888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.351913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.352162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.352186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.352352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.352375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 
00:29:28.785 [2024-11-20 12:43:34.352537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.352561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.352791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.352815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.353061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.353084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.353183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.353221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.353475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.353499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 
00:29:28.785 [2024-11-20 12:43:34.353749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.353773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.354045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.354069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.354345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.354371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.354496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.354524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.354771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.354795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 
00:29:28.785 [2024-11-20 12:43:34.355068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.355092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.355262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.355287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.355514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.355538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.785 [2024-11-20 12:43:34.355785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.785 [2024-11-20 12:43:34.355809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.785 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.356059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.356083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.356345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.356370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.356664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.356689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.356859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.356883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.357074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.357098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.357347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.357371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.357646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.357671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.357782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.357805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.357928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.357953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.358211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.358235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.358414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.358438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.358640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.358664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.358833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.358856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.359082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.359105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.359272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.359297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.359549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.359573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.359801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.359825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.360066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.360092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.360274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.360299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.360526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.360560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.360720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.360745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.360987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.361011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.361255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.361281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.361438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.361463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.361622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.361646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.361812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.361836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.362017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.362041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.362296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.362321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.362556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.362581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.362777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.362801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.362967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.362991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-20 12:43:34.363219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.363244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.363445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.363469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-20 12:43:34.363571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-20 12:43:34.363593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.363845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.363870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.364091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.364118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.364361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.364386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.364547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.364572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.364827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.364851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.365016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.365039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.365198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.365229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.365458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.365482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.365751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.365775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.365934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.365959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.366187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.366224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.366406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.366430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.366683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.366708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.366932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.366956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.367214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.367239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.367491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.367515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.367694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.367718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.367907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.367931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.368023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.368046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.368310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.368335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.368592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.368616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.368812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.368836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.368991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.369016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.369256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.369281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.369532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.369557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.369793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.369817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-20 12:43:34.370050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-20 12:43:34.370074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-20 12:43:34.370261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.370285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.370532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.370560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.370746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.370770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.371020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.371044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.371298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.371323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.371422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.371444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.371626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.371650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.371827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.371850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.372100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.372123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.372226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.372250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.372428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.787 [2024-11-20 12:43:34.372452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.787 qpair failed and we were unable to recover it.
00:29:28.787 [2024-11-20 12:43:34.372657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.372681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.372853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.372877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.373108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.373131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.373232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.373255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.373529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.373553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.373805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.373829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.374105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.374129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.374382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.374407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.374587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.374611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.374837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.374860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.375064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.375087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.375247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.375272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.375451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.375475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.375677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.375701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.375889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.375913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.376190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.376223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.376450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.376474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.376736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.376759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.376981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.377006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.377250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.377275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.377446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.377470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.377560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.377582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.377808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.377831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.378022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.378046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.378273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.378299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.378535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.378559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.378786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.378810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.378987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.379132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.379260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.379386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.379827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.379852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.380108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.380132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.380324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.380348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.380577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.380601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.380872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.380896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.788 [2024-11-20 12:43:34.381169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.788 qpair failed and we were unable to recover it.
00:29:28.788 [2024-11-20 12:43:34.381402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.381427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.381711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.381736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.381989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.382012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.382267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.382293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.382520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.382544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.382786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.382810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.383063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.383086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.383341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.383367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.383628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.383652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.383907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.383931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.384184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.384214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.384392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.384416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.384572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.384595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.384763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.384787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.385020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.385044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.385270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.385295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.385406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.385428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.385692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.385717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.385902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.385927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.386211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.386236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.386399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.386448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.386634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.386658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.386829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.386853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.387101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.387125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.387352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.387378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.387623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.387646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.387896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.387920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.388102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.388127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.388301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.388326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.388485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.388509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.388609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.388632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.388808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.388832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.389018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.389042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.389267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.389293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.389636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.389715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.389958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.389997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.390256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.390292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.390519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.390553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.789 [2024-11-20 12:43:34.390804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.789 [2024-11-20 12:43:34.390838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.789 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.391149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.391184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.391465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.391493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.391751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.391776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.392008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.392033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.392150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.392174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.392569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.392594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.392758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.392782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.393052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.393076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.393315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.393340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.393550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.393574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.393749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.393774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.394003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.394027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.394283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.394309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.394475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.394498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.394680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.394704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.394964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.394989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.395193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.395229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.395402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.395426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.395623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.395647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.395884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.395908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.396084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.396108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.396363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.396388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.396708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.396746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.396944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.396979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.397256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.397292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.397573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.397608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.397861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.790 [2024-11-20 12:43:34.397896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:28.790 qpair failed and we were unable to recover it.
00:29:28.790 [2024-11-20 12:43:34.398195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.398237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.398497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.398533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.398785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.398820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.399071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.399106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.399358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.399395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 
00:29:28.790 [2024-11-20 12:43:34.399701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.399736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.399926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.399960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.400166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.400212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.400489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.400524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-20 12:43:34.400747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.400782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 
00:29:28.790 [2024-11-20 12:43:34.401019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-20 12:43:34.401048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.401172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.401196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.401379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.401405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.401588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.401612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.401867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.401891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.402161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.402187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.402329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.402354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.402543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.402823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.402847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.403082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.403107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.403345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.403371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.403569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.403594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.403879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.403904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.404089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.404112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.404295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.404321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.404504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.404529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.404724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.404749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.404923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.404947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.405118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.405144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.405353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.405378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.405613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.405638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.405894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.405918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.406109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.406134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.406395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.406423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.406592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.406617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.406897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.406923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.407043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.407068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.407298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.407324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.407577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.407602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.407865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.407890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 
00:29:28.791 [2024-11-20 12:43:34.408069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.408093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.408350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.408375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.408539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.408564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.408767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.791 [2024-11-20 12:43:34.408791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.791 qpair failed and we were unable to recover it. 00:29:28.791 [2024-11-20 12:43:34.409059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.409083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.409331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.409357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.409483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.409509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.409695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.409721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.409956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.409981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.410082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.410108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.410347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.410373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.410609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.410633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.410907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.410932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.411116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.411140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.411325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.411350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.411538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.411563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.411770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.411796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.412018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.412044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.412214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.412241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.412403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.412429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.412688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.412713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.412885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.412910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.413188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.413222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.413437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.413462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.413631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.413656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.413927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.413951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.414075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.414100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.414265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.414291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.414531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.414556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.414835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.414861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.415040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.415065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.415375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.415576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.415601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.415767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.415792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.415984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.416011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.792 [2024-11-20 12:43:34.416270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.416296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.416428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.416454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.416653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.416678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.416864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.416888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 00:29:28.792 [2024-11-20 12:43:34.417125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.792 [2024-11-20 12:43:34.417149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.792 qpair failed and we were unable to recover it. 
00:29:28.795 [2024-11-20 12:43:34.441873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.441900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-20 12:43:34.442158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.442184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-20 12:43:34.442370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.442395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-20 12:43:34.442576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.442601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-20 12:43:34.442781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.442806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 
00:29:28.795 [2024-11-20 12:43:34.443065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-20 12:43:34.443091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.443274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.443302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.443420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.443443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.443614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.443640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.443809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.443834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.444075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.444102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.444235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.444263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.444436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.444462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.444646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.444683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.444941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.444966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.445140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.445165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.445352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.445379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.445496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.445521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.445715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.445740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.445942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.445969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.446178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.446212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.446506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.446531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.446707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.446731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.446977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.447005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.447238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.447263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.447429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.447455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.447777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.447801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.447914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.447936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.448040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.448063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.448239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.448265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.448426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.448452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.448570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.448595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.448760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.448785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.448980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.449015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.449164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.449215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.449350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.449386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.449588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.449630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.449782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.449818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.450015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.450050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.450238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.450274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-20 12:43:34.450466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.450501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.450697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-20 12:43:34.450732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-20 12:43:34.450857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.450892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.451154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.451189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.451427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.451463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.451578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.451612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.451825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.451871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.452038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.452063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.452301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.452427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.452461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.452609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.452644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.452864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.452899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.453013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.453038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.453216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.453242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.453368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.453393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.453555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.453579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.453747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.453771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.454001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.454025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.454232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.454258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.454498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.454522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.454629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.454653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.454829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.454863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.455046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.455080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.455267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.455307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.455450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.455484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.455690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.455724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.455877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.455911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.456043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.456067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.456320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.456356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.456556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.456590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.456786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.456820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.457017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.457252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.457277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.457442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.457467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.457584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.457619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-20 12:43:34.457767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.457801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.457986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.458020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.458240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.458320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.458542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.458581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-20 12:43:34.458813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-20 12:43:34.458850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.800 [2024-11-20 12:43:34.484194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.484226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.484343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.484367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.484524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.484547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.484689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.484712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.484996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.485020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 
00:29:28.800 [2024-11-20 12:43:34.485219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.485246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.485497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.485521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.485774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.485798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.485965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.485989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.486238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.486268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 
00:29:28.800 [2024-11-20 12:43:34.486443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.486468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.486641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.486665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.486781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.486805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.486981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.800 [2024-11-20 12:43:34.487005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.800 qpair failed and we were unable to recover it. 00:29:28.800 [2024-11-20 12:43:34.487180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.487211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 
00:29:28.801 [2024-11-20 12:43:34.487379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.487403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 00:29:28.801 [2024-11-20 12:43:34.487685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.487709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 00:29:28.801 [2024-11-20 12:43:34.487983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.488008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 00:29:28.801 [2024-11-20 12:43:34.488262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.488288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 00:29:28.801 [2024-11-20 12:43:34.488551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.801 [2024-11-20 12:43:34.488576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:28.801 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-11-20 12:43:34.488771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.488798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.488958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.488982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.489172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.489198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.489400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.489426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.489606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.489630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-11-20 12:43:34.489828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.489852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.490032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.490057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.490183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.490216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.490380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.490404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.490677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.490700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-11-20 12:43:34.490977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.491000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.491159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.491183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.491381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.491407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.491605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.491629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.491809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.491832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-11-20 12:43:34.491995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-11-20 12:43:34.492019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-11-20 12:43:34.492125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.492149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.492385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.492411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.492602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.492625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.492914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.492937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.493035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.493060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.493290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.493315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.493567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.493592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.493839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.493863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.494022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.494226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.494443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.494572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.494696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.494892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.494915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.495118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.495147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.495358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.495383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.495563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.495587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.495764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.495788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.496028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.496284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.496419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.496557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.496738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.496948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.496970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.497221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.497246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.497495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.497520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.497675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.497697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.497987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.498010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.498225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.498250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.498427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.498452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.498631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.498655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.498818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.498843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.499097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.499121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.499313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.499341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 
00:29:29.131 [2024-11-20 12:43:34.499514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.499538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.499734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.499759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.499860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.499885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.131 [2024-11-20 12:43:34.500113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.131 [2024-11-20 12:43:34.500137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.131 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.500392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.500418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.500657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.500682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.500941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.500965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.501195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.501244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.501479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.501504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.501627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.501649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.501939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.501963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.502221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.502247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.502425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.502450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.502625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.502649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.502810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.502835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.503028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.503053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.503311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.503336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.503525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.503549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.503728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.503752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.503940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.503965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.504223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.504249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.504481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.504507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.504708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.504734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.505011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.505229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.505438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.505582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.505712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.505909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.505934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.506186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.506221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.506465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.506489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.506661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.506705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.506981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.507016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.507271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.507307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.507617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.507651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.507913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.507949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.508250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.508293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.508473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.508497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.508748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.508773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.509004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.509029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 
00:29:29.132 [2024-11-20 12:43:34.509282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.509308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.132 [2024-11-20 12:43:34.509483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.132 [2024-11-20 12:43:34.509515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.132 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.509677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.509701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.509959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.509984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.510220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.510481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.510506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.510759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.510785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.511022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.511046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.511236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.511266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.511448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.511473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.511729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.511754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.511945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.511970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.512219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.512245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.512411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.512435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.512672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.512707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.512838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.512872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.513149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.513184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.513497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.513532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.513809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.513844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.514072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.514106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.514299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.514336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.514610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.514634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.514759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.514784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.515038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.515063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.515234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.515259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.515521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.515556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.515864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.515899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.516123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.516157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.516389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.516426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.516731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.516765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.517052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.517086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.517319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.517345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.517523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.517547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.517665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.517688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.517938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.517962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.518291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.518321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.518569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.518594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.518845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.518870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.519048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.519072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 00:29:29.133 [2024-11-20 12:43:34.519238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.133 [2024-11-20 12:43:34.519264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.133 qpair failed and we were unable to recover it. 
00:29:29.133 [2024-11-20 12:43:34.519501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.519525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.519784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.519809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.520000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.520026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.520288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.520314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.520478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.520501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.520616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.520638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.520748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.520770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.521035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.521059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.521248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.521274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.521510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.521535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.521792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.521818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.522079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.522103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.522360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.522387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.522668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.522692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.522939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.522963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.523193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.523225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.523402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.523426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.523611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.523635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.523890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.523915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.524078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.524103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.524308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.524333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.524565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.524590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.524779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.524803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.525060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.525085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.525346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.525372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.525605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.525630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.525863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.525888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.526059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.526083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.526346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.526372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 00:29:29.134 [2024-11-20 12:43:34.526471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.134 [2024-11-20 12:43:34.526493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.134 qpair failed and we were unable to recover it. 
00:29:29.134 [2024-11-20 12:43:34.526792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.526816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.526979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.527003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.527263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.527288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.527535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.527559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.527795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.527819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.527998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.528022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.528281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.528310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.528592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.528617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.134 qpair failed and we were unable to recover it.
00:29:29.134 [2024-11-20 12:43:34.528861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.134 [2024-11-20 12:43:34.528886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.529146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.529170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.529462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.529487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.529719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.529744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.530007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.530031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.530289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.530315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.530519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.530543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.530727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.530752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.530880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.530905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.531102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.531126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.531382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.531407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.531651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.531676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.531883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.531908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.532164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.532188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.532464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.532489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.532607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.532628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.532908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.532933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.533237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.533263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.533443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.533468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.533699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.533724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.533982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.534007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.534250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.534276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.534453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.534477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.534726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.534752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.534918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.534943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.535116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.535145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.535423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.535449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.535629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.535654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.535907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.535932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.536167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.536193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.536376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.536401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.536661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.536685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.135 qpair failed and we were unable to recover it.
00:29:29.135 [2024-11-20 12:43:34.536939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.135 [2024-11-20 12:43:34.536964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.537193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.537233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.537543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.537773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.537798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.538059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.538083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.538317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.538343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.538509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.538534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.538808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.538888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.539197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.539247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.539511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.539546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.539737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.539773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.540055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.540090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.540334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.540371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.540677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.540713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.540916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.540951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.541215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.541252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.541544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.541578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.541844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.541879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.542101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.542137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.542424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.542459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.542730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.542773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.543059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.543094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.543363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.543399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.543664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.543700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.543961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.543997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.544217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.544254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.544456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.544491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.544770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.544805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.545006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.545040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.545183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.545228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.545512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.545547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.545832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.545867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.546142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.546177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.546384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.546420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.546612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.546647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.546849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.546883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.547138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.547173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.547480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.547544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-11-20 12:43:34.547770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-11-20 12:43:34.547798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.548006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.548031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.548243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.548269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.548539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.548564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.548747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.548771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.548951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.548976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.549235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.549268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.549389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.549415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.549587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.549611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.549790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.549822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.550056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.550080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.550220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.550246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.550450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.550475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.550734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.550758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.550993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.551018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.551220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.551247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.551420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.137 [2024-11-20 12:43:34.551445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.137 qpair failed and we were unable to recover it.
00:29:29.137 [2024-11-20 12:43:34.551629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.551653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.551817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.551842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.552046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.552071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.552337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.552362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.552647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.552672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-11-20 12:43:34.552916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.552940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.553113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.553138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.553333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.553359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.553639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.553664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.553941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-11-20 12:43:34.554136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.554160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.554349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.554375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.554486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.554509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.554681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.554706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.554883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.554909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-11-20 12:43:34.555161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.555185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.555359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.555637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.555661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.555920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.555947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.556182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.556216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-11-20 12:43:34.556322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.556345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.556551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-11-20 12:43:34.556576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-11-20 12:43:34.556815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.556839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.557047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.557072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.557255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.557281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.557485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.557514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.557768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.557792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.557990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.558014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.558192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.558240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.558439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.558463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.558625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.558824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.558848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.559026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.559051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.559373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.559455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.559760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.559799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.560001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.560036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.560245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.560282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.560472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.560507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.560637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.560671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.560951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.560985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.561254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.561290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.561581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.561614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.561905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.561939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.562140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.562174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.562466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.562501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.562703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.562732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.563000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.563025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.563232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.563258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.563462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.563486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.563662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.563687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.563892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.563917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.564170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.564194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.564465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.564489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.564615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.564639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.564829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.564854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.565040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.565065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.565267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.565295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.565460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.565484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.565740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.565764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-11-20 12:43:34.566017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.566042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-11-20 12:43:34.566244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-11-20 12:43:34.566282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.566545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.566580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.566801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.566836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.567089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.567124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.567408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.567444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-11-20 12:43:34.567673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.567708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.567894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.567929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.568116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.568151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.568306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.568343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.568472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.568504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-11-20 12:43:34.568781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.568815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.569070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.569105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.569309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.569345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.569603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.569648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.569927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.569962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-11-20 12:43:34.570184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.570230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.570519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.570554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.570816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.570851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.571107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.571142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.571278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.571310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-11-20 12:43:34.571495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.571529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.571661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.571696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.571936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.571970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.572277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.572313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.572601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.572636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-11-20 12:43:34.572909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.572945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.573237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.573274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.573544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.573580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.573864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.573898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-11-20 12:43:34.574120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-11-20 12:43:34.574156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.142 [2024-11-20 12:43:34.604820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.604854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.142 [2024-11-20 12:43:34.605060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.605093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.142 [2024-11-20 12:43:34.605374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.605410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.142 [2024-11-20 12:43:34.605694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.605728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.142 [2024-11-20 12:43:34.606024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.606059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 
00:29:29.142 [2024-11-20 12:43:34.606267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.606303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.142 [2024-11-20 12:43:34.606490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.142 [2024-11-20 12:43:34.606525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.142 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.606720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.606755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.606900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.606934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.607224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.607261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.607477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.607511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.607696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.607730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.607911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.607946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.608230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.608266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.608532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.608566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.608855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.608890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.609167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.609228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.609506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.609541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.609817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.609851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.610137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.610172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.610373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.610415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.610695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.610729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.610936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.610971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.611226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.611262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.611534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.611568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.611713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.611747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.612023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.612058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.612264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.612301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.612554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.612587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.612858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.612893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.613098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.613133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.613391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.613427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.613727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.613762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.614025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.614060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.614359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.614395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.614650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.614685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.615013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.615285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.615321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.615576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.615610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.615811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.615846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-11-20 12:43:34.616119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.616154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.616368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.616403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.616703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.616738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.616996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.617031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-11-20 12:43:34.617245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-11-20 12:43:34.617282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.617587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.617622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.617845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.617879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.618162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.618197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.618391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.618426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.618700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.618733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.619016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.619051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.619334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.619370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.619566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.619600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.619830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.620105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.620141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.620354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.620390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.620580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.620615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.620797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.620831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.621107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.621142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.621341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.621377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.621631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.621672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.621805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.621839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.622060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.622095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.622281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.622317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.622508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.622542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.622676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.622711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.622913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.622948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.623227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.623262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.623474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.623508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.623791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.623826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.624050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.624086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.624305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.624342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.624540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.624574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.624777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.624812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-11-20 12:43:34.625072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-11-20 12:43:34.625107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-11-20 12:43:34.625361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.144 [2024-11-20 12:43:34.625397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.144 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats 113 more times between 12:43:34.625526 and 12:43:34.656476 ...]
00:29:29.147 [2024-11-20 12:43:34.656684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.147 [2024-11-20 12:43:34.656719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.147 qpair failed and we were unable to recover it.
00:29:29.147 [2024-11-20 12:43:34.656976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.147 [2024-11-20 12:43:34.657011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.147 qpair failed and we were unable to recover it. 00:29:29.147 [2024-11-20 12:43:34.657266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.657302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.657608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.657643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.657916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.657951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.658235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.658273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.658551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.658586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.658784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.658819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.659074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.659109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.659385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.659422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.659678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.659713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.659908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.659943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.660223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.660260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.660540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.660574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.660851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.660886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.661172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.661217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.661535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.661571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.661778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.661813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.662084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.662120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.662254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.662292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.662489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.662524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.662725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.662761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.663038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.663074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.663360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.663397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.663718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.663752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.663959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.663994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.664199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.664245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.664528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.664563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.664746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.664781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.665044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.665080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.665300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.665336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 
00:29:29.148 [2024-11-20 12:43:34.665562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.665602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.665809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.665844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.666123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.666157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.666368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.148 [2024-11-20 12:43:34.666405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.148 qpair failed and we were unable to recover it. 00:29:29.148 [2024-11-20 12:43:34.666702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.666737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.666938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.666973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.667251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.667288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.667591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.667625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.667827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.667861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.668048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.668083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.668371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.668408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.668591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.668626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.668881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.668917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.669112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.669148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.669469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.669505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.669709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.669743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.669998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.670033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.670306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.670343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.670544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.670579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.670827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.670861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.671137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.671172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.671486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.671522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.671801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.671835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.672102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.672286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.672324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.672604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.672639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.672892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.672927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.673141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.673177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.673393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.673428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.673621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.673656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.673908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.673943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.674159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.674193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.674492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.674528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.674792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.674827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.675040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.675075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.675354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.675391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.675590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.675826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.675859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.676139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.676175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.676374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.676409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 
00:29:29.149 [2024-11-20 12:43:34.676639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.676680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.676884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.149 [2024-11-20 12:43:34.676920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.149 qpair failed and we were unable to recover it. 00:29:29.149 [2024-11-20 12:43:34.677120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-11-20 12:43:34.677156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-11-20 12:43:34.677423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-11-20 12:43:34.677459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-11-20 12:43:34.677745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-11-20 12:43:34.677780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 
00:29:29.150 [2024-11-20 12:43:34.678051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.150 [2024-11-20 12:43:34.678085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.150 qpair failed and we were unable to recover it.
00:29:29.153 [... the same connect()/qpair-failure triplet (errno = 111, tqpair=0x7f1ad0000b90, addr=10.0.0.2, port=4420) repeats for timestamps 12:43:34.678374 through 12:43:34.709776; duplicate entries elided ...]
00:29:29.153 [2024-11-20 12:43:34.709972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.710011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.710232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.710269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.710525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.710561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.710832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.710866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.711139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.711175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 
00:29:29.153 [2024-11-20 12:43:34.711494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.711530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.711645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.711680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.711890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.711924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.712124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.712159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.712447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.712484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 
00:29:29.153 [2024-11-20 12:43:34.712751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.712786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.712983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.713017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.713283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.713321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.713542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.713578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.713837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.713872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 
00:29:29.153 [2024-11-20 12:43:34.714056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.714090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.714347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.714384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.714594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.714629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.714873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.714907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.715215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.715252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 
00:29:29.153 [2024-11-20 12:43:34.715502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.715537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.715730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.715764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.715961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.715995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.716269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.716306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.716501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.716535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 
00:29:29.153 [2024-11-20 12:43:34.716791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.716825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.717023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.717059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.717184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.717228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.717419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.153 [2024-11-20 12:43:34.717454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.153 qpair failed and we were unable to recover it. 00:29:29.153 [2024-11-20 12:43:34.717731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.717766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.717985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.718020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.718275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.718312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.718513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.718548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.718839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.718873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.719005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.719039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.719239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.719276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.719399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.719432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.719732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.719767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.719907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.719942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.720164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.720199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.720492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.720527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.720797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.720832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.721114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.721149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.721436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.721472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.721747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.721782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.721983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.722017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.722229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.722265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.722545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.722579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.722881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.722915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.723068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.723102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.723380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.723417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.723683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.723718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.724015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.724049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.724255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.724291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.724593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.724799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.724833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.725052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.725087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.725307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.725344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.725507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.725542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.725701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.725734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.725962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.725997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.726228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.726264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.726481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.726517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.726799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.726835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.727115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.727149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 
00:29:29.154 [2024-11-20 12:43:34.727348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.727383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.727662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.154 [2024-11-20 12:43:34.727697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.154 qpair failed and we were unable to recover it. 00:29:29.154 [2024-11-20 12:43:34.727907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.727949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.728248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.728285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.728550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.728586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 
00:29:29.155 [2024-11-20 12:43:34.728862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.728897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.729182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.729227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.729497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.729532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.729747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.729783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.730051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.730086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 
00:29:29.155 [2024-11-20 12:43:34.730284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.730320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.730598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.730634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.730841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.730875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.731020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.731055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 00:29:29.155 [2024-11-20 12:43:34.731337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.155 [2024-11-20 12:43:34.731374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.155 qpair failed and we were unable to recover it. 
00:29:29.158 [2024-11-20 12:43:34.762341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.762376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.762572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.762607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.762812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.762846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.763108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.763142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.763433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.763468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 
00:29:29.158 [2024-11-20 12:43:34.763738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.763772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.764051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.764086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.764377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.764412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.764686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.764722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.765002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.765036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 
00:29:29.158 [2024-11-20 12:43:34.765338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.765397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.765539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.765573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.765842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.765875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.766180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.766224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.766484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.766518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 
00:29:29.158 [2024-11-20 12:43:34.766703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.766737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.766933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.766967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.767170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.767214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.767527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.767825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.767859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 
00:29:29.158 [2024-11-20 12:43:34.768122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.768155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.768389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.768426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.768727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.768762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.769064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.158 [2024-11-20 12:43:34.769099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.158 qpair failed and we were unable to recover it. 00:29:29.158 [2024-11-20 12:43:34.769391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.769428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.769726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.769762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.769974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.770008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.770220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.770256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.770451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.770487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.770694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.770727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.771003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.771037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.771340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.771376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.771637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.771672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.771967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.772002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.772267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.772304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.772570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.772604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.772899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.772935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.773235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.773277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.773466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.773500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.773624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.773658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.773905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.773939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.774190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.774236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.774489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.774523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.774778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.774813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.775022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.775056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.775347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.775383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.775520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.775554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.775760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.775794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.776086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.776121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.776347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.776383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.776644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.776678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.776982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.777017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.777225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.777262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.777542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.777577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.777852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.777887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.778148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.778183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.778411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.778445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.778702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.778737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.779022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.779056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.779354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.779391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 
00:29:29.159 [2024-11-20 12:43:34.779657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.779691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.779915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.159 [2024-11-20 12:43:34.779950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.159 qpair failed and we were unable to recover it. 00:29:29.159 [2024-11-20 12:43:34.780223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.780260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.780526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.780562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.780846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.780881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 
00:29:29.160 [2024-11-20 12:43:34.781178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.781220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.781476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.781511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.781805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.781840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.782111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.782147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.782434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.782470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 
00:29:29.160 [2024-11-20 12:43:34.782662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.782696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.782956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.782991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.783222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.783258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.783541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.783575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 00:29:29.160 [2024-11-20 12:43:34.783787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.160 [2024-11-20 12:43:34.783821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.160 qpair failed and we were unable to recover it. 
00:29:29.160 [2024-11-20 12:43:34.784112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.784146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.784374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.784410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.784679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.784719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.784950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.784984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.785239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.785275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.785581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.785616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.785894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.785929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.786219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.786255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.786530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.786565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.786841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.786875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.787161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.787196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.787474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.787509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.787790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.787823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.788071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.788106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.788296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.788332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.788588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.788621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.788902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.788938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.789135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.789170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.789387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.789423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.789634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.789669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.789854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.789889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.790167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.790213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.790406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.790440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.790571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.790605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.790883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.160 [2024-11-20 12:43:34.790917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.160 qpair failed and we were unable to recover it.
00:29:29.160 [2024-11-20 12:43:34.791179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.791235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.791458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.791493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.791705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.791740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.791994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.792030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.792250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.792287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.792563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.792597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.792782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.792816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.793083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.793118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.793401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.793437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.793715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.793749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.794002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.794037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.794239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.794275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.794463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.794498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.794752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.794787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.795043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.795076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.795275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.795311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.795525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.795560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.795841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.795881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.796159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.796194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.796398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.796433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.796635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.796669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.796851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.796887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.797167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.797213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.797482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.797516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.797750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.797974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.798009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.798223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.798258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.798394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.798430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.798652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.798686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.798871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.798906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.799243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.799280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.799526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.799561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.799711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.799747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.800022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.800056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.800316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.800353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.800650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.800686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.800949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.800984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.161 [2024-11-20 12:43:34.801111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.161 [2024-11-20 12:43:34.801146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.161 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.801436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.801472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.801705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.801740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.802014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.802049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.802329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.802366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.802508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.802542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.802737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.802771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.803056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.803091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.803365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.803401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.803688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.803722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.803839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.803872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.804151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.804184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.804424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.804460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.804647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.804680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.804941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.804976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.805181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.805229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.805426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.805460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.805662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.805696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.805952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.805987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.806181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.806228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.806452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.806492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.806770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.806804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.807007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.807042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.807323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.807359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.807562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.807597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.807854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.807889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.808031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.808065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.808339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.808376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.808651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.808685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.808948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.808983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.809211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.809247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.809526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.809561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.809775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.809810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.162 qpair failed and we were unable to recover it.
00:29:29.162 [2024-11-20 12:43:34.810032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.162 [2024-11-20 12:43:34.810065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.810343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.810380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.810636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.810671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.810973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.811007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.811269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.811306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.811491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.811526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.811779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.811814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.812004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.812037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.812292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.812327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.812612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.812646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.812865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.812899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.813112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.813146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.813271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.813305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.813586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.813621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.813881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.813945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.814187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.814229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.814503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.814528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.814703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.814727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.814962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.814986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.815242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.163 [2024-11-20 12:43:34.815267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.163 qpair failed and we were unable to recover it.
00:29:29.163 [2024-11-20 12:43:34.815530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.815553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.815787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.815812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.815930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.815954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.816212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.816237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.816439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.816463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 
00:29:29.163 [2024-11-20 12:43:34.816630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.816655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.816772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.816793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.817049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.817073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.817274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.817300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.817501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 
00:29:29.163 [2024-11-20 12:43:34.817767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.817791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.818073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.818098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.818342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.818368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.818570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.163 [2024-11-20 12:43:34.818594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.163 qpair failed and we were unable to recover it. 00:29:29.163 [2024-11-20 12:43:34.818784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.818809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.819085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.819109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.819372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.819398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.819603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.819627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.819806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.819831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.820026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.820050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.820228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.820255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.820544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.820572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.820669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.820691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.820870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.820896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.821173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.821197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.821466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.821491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.821729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.821754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.821925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.821950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.822188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.822220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.822330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.822352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.822606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.822630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.822797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.822822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.822932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.822954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.823243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.823268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.823436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.823461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.823638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.823663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.824070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.824099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.824373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.824402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.824664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.824688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.824854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.824877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.825140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.825164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.825375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.825399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.825560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.825584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.825838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.825862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.826040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.826064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.826244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.826269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.826477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.826502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.826609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.826630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.826809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.826840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.827013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.827037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 
00:29:29.164 [2024-11-20 12:43:34.827292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.164 [2024-11-20 12:43:34.827317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.164 qpair failed and we were unable to recover it. 00:29:29.164 [2024-11-20 12:43:34.827497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.827521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.827777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.827801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.828049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.828073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.828355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.828381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.165 [2024-11-20 12:43:34.828637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.828662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.828908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.828932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.829113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.829137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.829371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.829395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.829557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.829579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.165 [2024-11-20 12:43:34.829834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.829857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.830014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.830210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.830400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.830530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.165 [2024-11-20 12:43:34.830675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.830953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.830977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.831213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.831238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.831461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.831486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.831748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.831773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.165 [2024-11-20 12:43:34.832012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.832036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.832293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.832319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.832494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.832519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.832691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.832714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.832910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.832934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.165 [2024-11-20 12:43:34.833183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.833214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.833393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.833418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.833588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.833613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.833791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.833815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 00:29:29.165 [2024-11-20 12:43:34.834008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-11-20 12:43:34.834033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.165 qpair failed and we were unable to recover it. 
00:29:29.168 [2024-11-20 12:43:34.860028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.168 [2024-11-20 12:43:34.860052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.168 qpair failed and we were unable to recover it. 00:29:29.168 [2024-11-20 12:43:34.860319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.168 [2024-11-20 12:43:34.860346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.168 qpair failed and we were unable to recover it. 00:29:29.168 [2024-11-20 12:43:34.860527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.168 [2024-11-20 12:43:34.860554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.168 qpair failed and we were unable to recover it. 00:29:29.168 [2024-11-20 12:43:34.860812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.168 [2024-11-20 12:43:34.860837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.168 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.860932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.860956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.861216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.861242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.861493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.861518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.861779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.861803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.862064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.862089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.862347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.862373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.862606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.862632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.862846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.862872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.863135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.863160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.863427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.863453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.863651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.863677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.863849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.863875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.864009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.864034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.864295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.864321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.864509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.864534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.864791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.864817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.864993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.865019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.865127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.865152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.865403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.865430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.865641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.865667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.865841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.865865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.866125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.866150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.866432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.866458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.866700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.866725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.866963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.866988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.867246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.867272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.867473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.867498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.867734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.867759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.868043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.868069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.868301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.868327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.868437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.868462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 
00:29:29.169 [2024-11-20 12:43:34.868580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.868610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.868889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.868914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.169 [2024-11-20 12:43:34.869080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.169 [2024-11-20 12:43:34.869106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.169 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.869322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.869348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.869579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.869604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 
00:29:29.170 [2024-11-20 12:43:34.869869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.869893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.870055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.870080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.870258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.870282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.870460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.870483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 00:29:29.170 [2024-11-20 12:43:34.870749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.170 [2024-11-20 12:43:34.870774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.170 qpair failed and we were unable to recover it. 
00:29:29.448 [2024-11-20 12:43:34.870988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-11-20 12:43:34.871015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-11-20 12:43:34.871275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-11-20 12:43:34.871301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-11-20 12:43:34.871515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-11-20 12:43:34.871542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-11-20 12:43:34.871705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-11-20 12:43:34.871730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.872025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.872050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.872311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.872337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.872600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.872626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.872827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.872853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.872980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.873103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.873249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.873441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.873629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.873838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.873862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.874112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.874395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.874421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.874610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.874635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.874796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.874821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.875093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.875119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.875299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.875326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.875490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.875515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.875700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.875725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.875843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.875868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.876146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.876172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.876359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.876384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.876645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.876670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.876948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.876973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.877246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.877271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.877447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.877473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-11-20 12:43:34.877641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-11-20 12:43:34.877665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-11-20 12:43:34.877924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.449 [2024-11-20 12:43:34.877947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.449 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 → sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-20 12:43:34.878208 through 12:43:34.905444 ...]
00:29:29.452 [2024-11-20 12:43:34.905706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.452 [2024-11-20 12:43:34.905730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.452 qpair failed and we were unable to recover it.
00:29:29.452 [2024-11-20 12:43:34.906009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.452 [2024-11-20 12:43:34.906034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.452 qpair failed and we were unable to recover it. 00:29:29.452 [2024-11-20 12:43:34.906294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.452 [2024-11-20 12:43:34.906332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.452 qpair failed and we were unable to recover it. 00:29:29.452 [2024-11-20 12:43:34.906628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.452 [2024-11-20 12:43:34.906663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.452 qpair failed and we were unable to recover it. 00:29:29.452 [2024-11-20 12:43:34.906923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.452 [2024-11-20 12:43:34.906947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.452 qpair failed and we were unable to recover it. 00:29:29.452 [2024-11-20 12:43:34.907058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.452 [2024-11-20 12:43:34.907082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.907256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.907281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.907462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.907486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.907745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.907778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.908033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.908067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.908293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.908319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.908512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.908536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.908792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.908816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.909010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.909044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.909316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.909352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.909557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.909591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.909788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.909823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.910115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.910149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.910292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.910327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.910635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.910670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.910949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.910982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.911165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.911200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.911484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.911509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.911740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.911770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.911950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.911974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.912149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.912174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.912449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.912475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.912736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.912761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.913023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.913047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.913337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.913374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.913585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.913620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.913897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.913931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.914217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.914244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.914482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.914508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.914746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.914770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 00:29:29.453 [2024-11-20 12:43:34.915024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.453 [2024-11-20 12:43:34.915049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.453 qpair failed and we were unable to recover it. 
00:29:29.453 [2024-11-20 12:43:34.915291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.915316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.915580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.915604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.915883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.915908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.916160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.916194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.916517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.916552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.916787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.916822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.917101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.917135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.917416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.917452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.917748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.917783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.918052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.918098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.918401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.918586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.918612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.918849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.918873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.919110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.919144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.919358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.919395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.919593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.919627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.919888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.919923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.920123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.920158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.920428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.920453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.920685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.920709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.920812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.920836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.921096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.921130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.921434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.921471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.921729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.921764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.921964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.922273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.922309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.922575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.922599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.922797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.922821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.923107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.923150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.923386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.923423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.923538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.923573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.923758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.923793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.924049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.924082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.924297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.924322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.924493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.924518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 
00:29:29.454 [2024-11-20 12:43:34.924798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.924832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.925018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.925052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.925195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.925241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.454 qpair failed and we were unable to recover it. 00:29:29.454 [2024-11-20 12:43:34.925498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.454 [2024-11-20 12:43:34.925524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-11-20 12:43:34.925717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.925753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 
00:29:29.455 [2024-11-20 12:43:34.925970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.926004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-11-20 12:43:34.926213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.926240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-11-20 12:43:34.926496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.926532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-11-20 12:43:34.926809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.926853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-11-20 12:43:34.927122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-11-20 12:43:34.927165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.956271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.956297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.956487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.956521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.956744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.956779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.956931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.956965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.957245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.957282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.957401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.957442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.957701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.957727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.957971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.957996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.958283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.958319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.958608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.958649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.958808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.958844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.959144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.959179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.959428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.959453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.959633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.959658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.959900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.959934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.960188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.960232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.960543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.960576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.960853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.960889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.961166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.961212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.961487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.961524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.961763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.961800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.962106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.962143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.962464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.962501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.962740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.962775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.963044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.963078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-11-20 12:43:34.963280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.963316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.963505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.963540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.963763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.963798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-11-20 12:43:34.964042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-11-20 12:43:34.964078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.964215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.964252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.964530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.964557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.964665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.964690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.964852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.964877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.965056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.965081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.965325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.965350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.965586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.965611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.965776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.965806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.965980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.966013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.966294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.966331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.966529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.966564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.966752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.966784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.967073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.967108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.967299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.967336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.967643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.967678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.967885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.967919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.968177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.968212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.968339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.968365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.968555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.968589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.968779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.968814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.968940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.968974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.969185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.969230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.969547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.969583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.969780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.969815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.970001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.970043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.970219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.970243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.970503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.970537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.970724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.970760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.970884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.970918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.971196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.971248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.971368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.971402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.971717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.971750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.972029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.972341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.972378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.972584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.972618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.972904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.972937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-11-20 12:43:34.973136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.973171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.973335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-11-20 12:43:34.973371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-11-20 12:43:34.973591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.973625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.973956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.973991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.974221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.974246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 
00:29:29.460 [2024-11-20 12:43:34.974453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.974477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.974590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.974615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.974802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.974827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.975085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.975109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 00:29:29.460 [2024-11-20 12:43:34.975343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.460 [2024-11-20 12:43:34.975370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.460 qpair failed and we were unable to recover it. 
00:29:29.460 [2024-11-20 12:43:34.975625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.460 [2024-11-20 12:43:34.975650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.460 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats verbatim for tqpair=0x7b9ba0 (addr=10.0.0.2, port=4420), with successive timestamps from 12:43:34.975957 through 12:43:35.005433 ...]
00:29:29.463 [2024-11-20 12:43:35.005717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.005750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.006043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.006078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.006310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.006336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.006455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.006480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.006735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.006776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 
00:29:29.463 [2024-11-20 12:43:35.007007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.007042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.007350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.007387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.007665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.007700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.007901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.007936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.008223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.008259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 
00:29:29.463 [2024-11-20 12:43:35.008531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.008783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.008818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.009077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.009112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.009302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.009329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.009568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.009602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 
00:29:29.463 [2024-11-20 12:43:35.009799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.009834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.010123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.010168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.010426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.010452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.463 [2024-11-20 12:43:35.010580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.463 [2024-11-20 12:43:35.010604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.463 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.010842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.010878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.011090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.011125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.011386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.011412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.011615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.011640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.011803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.011832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.012021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.012046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.012308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.012334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.012565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.012589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.012749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.012775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.012953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.012978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.013241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.013267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.013514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.013539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.013800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.013825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.013957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.013982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.014170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.014195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.014337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.014379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.014589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.014624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.014880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.014915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.015183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.015236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.015511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.015536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.015814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.015858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.016064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.016100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.016305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.016342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.016622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.016648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.016921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.016946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.017208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.017234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.017516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.017542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.017804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.017838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.018045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.018080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.018306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.018342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.018530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.018564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.018850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.018885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.019082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.019117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.019378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.019415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.019610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.019635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 00:29:29.464 [2024-11-20 12:43:35.019870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.464 [2024-11-20 12:43:35.019895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.464 qpair failed and we were unable to recover it. 
00:29:29.464 [2024-11-20 12:43:35.020154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.020179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.020445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.020470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.020599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.020624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.020899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.020924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.021234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.021270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 
00:29:29.465 [2024-11-20 12:43:35.021532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.021567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.021828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.021863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.022158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.022193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.022498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.022533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.022801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.022836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 
00:29:29.465 [2024-11-20 12:43:35.023133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.023168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.023396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.023432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.023704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.023738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.024024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.024058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.024334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.024361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 
00:29:29.465 [2024-11-20 12:43:35.024542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.024568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.024754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.024779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.024993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.025018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.025221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.025247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 00:29:29.465 [2024-11-20 12:43:35.025482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.465 [2024-11-20 12:43:35.025506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:29.465 qpair failed and we were unable to recover it. 
00:29:29.465 [2024-11-20 12:43:35.025743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.465 [2024-11-20 12:43:35.025768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.465 qpair failed and we were unable to recover it.
[... 11 further identical connect() retries (errno = 111) for tqpair=0x7b9ba0, 12:43:35.026029 through 12:43:35.028497, elided ...]
00:29:29.465 [2024-11-20 12:43:35.028777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.465 [2024-11-20 12:43:35.028860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:29.465 qpair failed and we were unable to recover it.
[... repeated identical connect() retries (errno = 111) for tqpair=0x7f1ad8000b90, 12:43:35.029021 through 12:43:35.057109, elided ...]
00:29:29.468 [2024-11-20 12:43:35.057321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.468 [2024-11-20 12:43:35.057357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:29.468 qpair failed and we were unable to recover it.
00:29:29.468 [2024-11-20 12:43:35.057567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.057602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.057814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.057849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.058141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.058176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.058481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.058518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.058741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.058775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 
00:29:29.468 [2024-11-20 12:43:35.059059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.059092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.059370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.059408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.468 [2024-11-20 12:43:35.059697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.468 [2024-11-20 12:43:35.059731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.468 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.059942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.059976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.060229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.060266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.060484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.060519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.060719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.060752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.060967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.061001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.061217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.061253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.061534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.061568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.061798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.061833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.062047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.062082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.062269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.062307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.062611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.062645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.062952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.062986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.063239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.063276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.063532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.063574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.063834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.063869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.064177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.064222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.064499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.064535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.064759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.064793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.064998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.065318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.065353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.065564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.065599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.065877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.065912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.066114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.066150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.066453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.066491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.066679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.066713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.067021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.067057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.067312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.067349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.067579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.067614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.067892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.067926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.068221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.068257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.068390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.068426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.068709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.068743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 
00:29:29.469 [2024-11-20 12:43:35.068985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.069020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.069342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.069378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.069654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.469 [2024-11-20 12:43:35.069688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.469 qpair failed and we were unable to recover it. 00:29:29.469 [2024-11-20 12:43:35.069843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.069877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.070080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.070116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.070416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.070639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.070674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.070891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.070926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.071151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.071187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.071514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.071551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.071830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.071865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.072144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.072178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.072465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.072502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.072695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.072730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.073018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.073053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.073183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.073232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.073462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.073497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.073780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.073815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.074015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.074049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.074242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.074278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.074482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.074518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.074777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.074818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.074955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.074991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.075274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.075312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.075585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.075620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.075827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.075862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.076137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.076173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.076398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.076433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.076659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.076694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.076908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.076943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.077210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.077245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.077394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.077430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.077637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.077673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.077881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.077916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.078131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.078167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.470 [2024-11-20 12:43:35.078402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.078438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.078711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.078745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.079002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.079036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.079269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.079307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 00:29:29.470 [2024-11-20 12:43:35.079588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.470 [2024-11-20 12:43:35.079623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.470 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.110976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.111012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.111139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.111173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.111383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.111420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.111700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.111735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.111925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.111959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.112170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.112223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.112525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.112560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.112832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.112866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.113153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.113188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.113420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.113455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.113739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.113773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.114058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.114093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.114392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.114429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.114628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.114662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.114943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.114977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.115243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.115282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.115547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.115581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.115716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.115751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.116041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.116076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.116357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.116393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.116627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.116661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.116847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.116881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.117167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.117200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.117403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.117438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.117717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.117751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.118036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.118070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.118285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.118321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.118510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.118545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.118749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.118783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.119062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.119096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 
00:29:29.474 [2024-11-20 12:43:35.119361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.119399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.119680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.119721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.119991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.120026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.474 [2024-11-20 12:43:35.120156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.474 [2024-11-20 12:43:35.120191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.474 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.120499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.120535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.120723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.120758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.121036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.121070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.121282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.121318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.121599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.121634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.121843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.121878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.122108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.122143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.122364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.122402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.122632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.122666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.122874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.122909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.123194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.123253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.123412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.123446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.123639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.123674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.123883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.123918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.124149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.124184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.124347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.124383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.124668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.124702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.125007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.125041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.125303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.125340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.125624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.125658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.125936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.125971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.126156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.126190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.126433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.126468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.126608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.126642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.126950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.126986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.127300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.127511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.127546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.127733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.127768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.128055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.128089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.128385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.128421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.128688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.128723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.128915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.128950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.129174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.129223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.129412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.129448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.129711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.129744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.129958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.129993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 
00:29:29.475 [2024-11-20 12:43:35.130200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.475 [2024-11-20 12:43:35.130250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.475 qpair failed and we were unable to recover it. 00:29:29.475 [2024-11-20 12:43:35.130508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.130543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.130747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.130782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.130973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.131009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.131291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.131329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 
00:29:29.476 [2024-11-20 12:43:35.131462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.131497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.131613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.131648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.131929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.131963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.132269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.132305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 00:29:29.476 [2024-11-20 12:43:35.132582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.476 [2024-11-20 12:43:35.132618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.476 qpair failed and we were unable to recover it. 
00:29:29.479 [2024-11-20 12:43:35.162843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.162878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.163086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.163120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.163392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.163431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.163648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.163683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.163940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.163975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 
00:29:29.479 [2024-11-20 12:43:35.164163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.164198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.164491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.164528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.164753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.164787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.164976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.165011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.165275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.165311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 
00:29:29.479 [2024-11-20 12:43:35.165513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.165549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.165815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.165849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.166137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.166171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.166449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.166485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.166637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.166671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 
00:29:29.479 [2024-11-20 12:43:35.166956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.166992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.167179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.167224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.167380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.167413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.167671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.167706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.167914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.167949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 
00:29:29.479 [2024-11-20 12:43:35.168136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.168171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.168318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.168355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.168659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.479 [2024-11-20 12:43:35.168693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.479 qpair failed and we were unable to recover it. 00:29:29.479 [2024-11-20 12:43:35.168954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.168989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.169200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.169256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.169541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.169575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.169797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.169831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.170057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.170092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.170295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.170336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.170596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.170632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.170840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.170875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.171140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.171174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.171317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.171353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.171610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.171644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.171830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.172149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.172183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.172466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.172503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.172784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.172818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.173126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.173160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.173473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.173510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.173774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.173808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.174102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.174136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.174380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.174417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.174679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.174714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.174844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.174878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.175164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.175200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.175495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.175531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.175740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.175775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.175967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.176001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.176283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.176320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.176601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.176636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.176835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.176869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.176988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.177022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.177347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.177382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.177655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.177690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.177981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.178015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.178286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.178322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.178594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.178628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.178919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.178953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.480 [2024-11-20 12:43:35.179234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.179278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 
00:29:29.480 [2024-11-20 12:43:35.179470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.480 [2024-11-20 12:43:35.179505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.480 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.179790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.179825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.180085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.180119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.180356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.180393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.180661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.180694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 
00:29:29.481 [2024-11-20 12:43:35.180952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.180986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.181196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.181242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.181520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.181554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.181741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.181781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.181928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.181963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 
00:29:29.481 [2024-11-20 12:43:35.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.182115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.182374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.182410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.182691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.182725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.183010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.183044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.183327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.183366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 
00:29:29.481 [2024-11-20 12:43:35.183642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.183676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.183960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.183995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.184274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.184311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.184462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.184497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 00:29:29.481 [2024-11-20 12:43:35.184775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.481 [2024-11-20 12:43:35.184810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.481 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-11-20 12:43:35.215575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.215610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.215893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.215927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.216239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.216275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.216556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.216591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.216797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.216831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-11-20 12:43:35.217033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.217068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.217345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.217380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.217520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.217555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.217769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.217803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.217999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.218033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-11-20 12:43:35.218262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.218297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.218448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.218483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.218771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.218807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.218996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.219031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.219279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.219317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-11-20 12:43:35.219525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.219561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.219764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.219799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.219934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.219968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.220264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.220301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.220576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.220611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-11-20 12:43:35.220922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-11-20 12:43:35.220957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-11-20 12:43:35.221236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.221272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.221551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.221587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.221786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.221820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.222082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.222117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.222330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.222367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.222603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.222637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.222947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.222981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.223265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.223302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.223612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.223648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.223846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.223881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.224140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.224175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.224482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.224518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.224796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.224829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.225036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.225071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.225351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.225388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.225614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.225649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.225837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.225873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.226130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.226172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.226452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.226487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.226788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.226822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.227088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.227123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.227409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.227447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.227722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.227756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.228037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.228072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.228354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.228391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.228642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.228677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.228882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.228917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.229173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.229222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.229481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.229517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.229774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.229809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.230067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.230101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.230381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.230418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.230634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.230668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.230879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.230913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-11-20 12:43:35.231192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.231249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.231560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.231595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.231811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.231845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.232129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-11-20 12:43:35.232163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-11-20 12:43:35.232384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.232420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-11-20 12:43:35.232632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.232665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.232935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.232969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.233256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.233292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.233585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.233619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.233844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.233879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-11-20 12:43:35.234166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.234213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.234502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.234537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.234799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.234834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.235040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.235075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.235263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.235300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-11-20 12:43:35.235502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.235538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.235795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.235830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.236087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.236120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.236377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.236413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-11-20 12:43:35.236602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-11-20 12:43:35.236637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-11-20 12:43:35.236837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.764 [2024-11-20 12:43:35.236870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:29.764 qpair failed and we were unable to recover it.
[The same three-record error sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 12:43:35.237153 through 12:43:35.268111; duplicate entries elided.]
00:29:29.767 [2024-11-20 12:43:35.268399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.268436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.268707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.268743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.268971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.269005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.269238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.269273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.269529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.269563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 
00:29:29.767 [2024-11-20 12:43:35.269780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.269815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.270095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.270129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.270316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.270350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.270624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.270659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.270882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.270917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 
00:29:29.767 [2024-11-20 12:43:35.271176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.271228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.271378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.271413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.271603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.767 [2024-11-20 12:43:35.271637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.767 qpair failed and we were unable to recover it. 00:29:29.767 [2024-11-20 12:43:35.271883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.271918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.272032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.272068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.272273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.272310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.272523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.272558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.272840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.272874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.273078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.273112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.273393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.273428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.273585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.273620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.273910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.273946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.274174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.274220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.274477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.274511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.274831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.274866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.275055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.275090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.275355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.275393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.275682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.275717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.276008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.276331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.276368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.276558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.276599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.276813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.277053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.277088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.277304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.277340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.277546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.277581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.277767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.277809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.278015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.278050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.278306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.278346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.278483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.278521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.278726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.278761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.279071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.279106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.279392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.279431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.279731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.279766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.279993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.280028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.280241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.280280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.280488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.280522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.280781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.280814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.281001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.281036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.281256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.281292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.768 [2024-11-20 12:43:35.281502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.281537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 
00:29:29.768 [2024-11-20 12:43:35.281746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.768 [2024-11-20 12:43:35.281781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.768 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.281985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.282020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.282216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.282253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.282474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.282509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.282719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.282752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 
00:29:29.769 [2024-11-20 12:43:35.282886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.282921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.283195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.283244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.283518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.283552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.283706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.283740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.283946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.283980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 
00:29:29.769 [2024-11-20 12:43:35.284260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.284297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.284538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.284573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.284786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.284821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.285114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.285148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.285445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.285482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 
00:29:29.769 [2024-11-20 12:43:35.285626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.285661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.285814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.285849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.286111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.286146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.286341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.286378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.286661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.286696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 
00:29:29.769 [2024-11-20 12:43:35.286978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.287013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.287150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.287185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.287488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.287524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.287664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.287699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 00:29:29.769 [2024-11-20 12:43:35.287907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.769 [2024-11-20 12:43:35.287942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.769 qpair failed and we were unable to recover it. 
00:29:29.769 [2024-11-20 12:43:35.288234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.769 [2024-11-20 12:43:35.288279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:29.769 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f1ad8000b90 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 12:43:35.288234 through 12:43:35.319137; identical repeats omitted ...]
00:29:29.772 [2024-11-20 12:43:35.319416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.319451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-11-20 12:43:35.319592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.319627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-11-20 12:43:35.319781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.319815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-11-20 12:43:35.320099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.320133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-11-20 12:43:35.320339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.320376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-11-20 12:43:35.320664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.320704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-11-20 12:43:35.320895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-11-20 12:43:35.320929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.321076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.321110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.321242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.321283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.321423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.321458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.321606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.321641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.321825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.321861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.322004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.322040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.322284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.322321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.322441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.322476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.322592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.322626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.322817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.322851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.323045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.323080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.323289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.323324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.323618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.323653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.323911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.323945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.324200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.324258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.324395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.324430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.324659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.324693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.324908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.324942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.325088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.325122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.325279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.325315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.325574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.325609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.325836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.325871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.325992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.326027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.326234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.326269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.326387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.326421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.326729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.326854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.326888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.327092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.327126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.327388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.327424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.327630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.327664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.327868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.327902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.328136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.328169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.328233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7af0 (9): Bad file descriptor 00:29:29.773 [2024-11-20 12:43:35.328493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.328573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-11-20 12:43:35.328751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.328789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.329082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.329118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.329326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.329364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.329570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-11-20 12:43:35.329607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-11-20 12:43:35.329797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.329831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.330078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.330114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.330372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.330409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.330689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.330723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.330844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.330880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.331070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.331105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.331315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.331351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.331548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.331583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.331788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.331822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.332011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.332046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.332249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.332285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.332566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.332602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.332860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.332895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.333100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.333296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.333340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.333572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.333607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.333817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.333851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.334003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.334039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.334293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.334330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.334476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.334511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.334815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.334849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.335123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.335157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.335316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.335352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.335607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.335641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.335860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.335895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.336027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.336061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.336268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.336304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.336503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.336537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.336741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.336775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.337035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.337070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-11-20 12:43:35.337327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-11-20 12:43:35.337363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-11-20 12:43:35.337560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:29.774 [2024-11-20 12:43:35.337595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 
00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.777 [2024-11-20 12:43:35.365156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-11-20 12:43:35.365190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-11-20 12:43:35.365395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.365430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.365567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.365601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.365851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.366110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.366144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.366312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.366348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.366555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.366588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.366781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.366814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.367065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.367100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.367353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.367388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.367661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.367694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.367831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.367866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.367993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.368026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.368152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.368187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.368494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.368530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.368661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.368695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.368817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.368851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.368982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.369016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.369144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.369179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.369376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.369410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.369523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.369556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.369684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.369725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.369972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.370006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.370258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.370293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.370478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.370511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.370693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.370728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.370918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.370951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.371092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.371126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.371319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.371360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.371500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.371533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-11-20 12:43:35.371730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.371763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.372013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.372048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.372231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.372265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-11-20 12:43:35.372459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-11-20 12:43:35.372492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.372690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.372724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.372864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.372898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.373034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.373067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.373255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.373291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.373491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.373524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.373719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.373753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.373865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.373898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.374030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.374063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.374270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.374306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.374487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.374520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.374697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.374731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.374929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.374963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.375159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.375193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.375481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.375515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.375711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.375745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.376033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.376067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.376215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.376250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.376470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.376503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.376694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.376729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.377005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.377039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.377173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.377215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.377409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.377444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.377556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.377589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.377841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.377874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.378053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.378087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.378257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.378293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.378505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.378540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.378837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.378872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.379125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.379158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.379417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.379452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.379654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.379687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.379866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.380041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.380074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.380265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.380301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.380420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.380461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.380644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.380677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 
00:29:29.779 [2024-11-20 12:43:35.380928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.779 [2024-11-20 12:43:35.380961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.779 qpair failed and we were unable to recover it. 00:29:29.779 [2024-11-20 12:43:35.381116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.381150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.381371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.381407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.381544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.381578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.381763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.381797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.382074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.382107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.382286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.382321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.382515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.382550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.382697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.382917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.382950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.383143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.383178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.383454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.383489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.383762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.383796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.383923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.383958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.384138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.384172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.384324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.384358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.384628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.384662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.384957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.384991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.385216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.385249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.385441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.385475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.385670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.385703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.385915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.385950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.386194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.386257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.386448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.386482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.386592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.386626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.386877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.386912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.387089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.387122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.387237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.387273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.387521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.387555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.387801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.387833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.388083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.388118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.388362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.388397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.388584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.388617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.388738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.388771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.388951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.388985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 
00:29:29.780 [2024-11-20 12:43:35.389176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.389217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.389535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.389573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.389707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.389742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.389930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.780 [2024-11-20 12:43:35.389970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.780 qpair failed and we were unable to recover it. 00:29:29.780 [2024-11-20 12:43:35.390129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.390163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.390309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.390345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.390504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.390536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.390711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.390744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.390937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.390970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.391145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.391179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.391323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.391356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.391599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.391634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.391812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.391845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.392116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.392149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.392427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.392462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.392708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.392742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.392987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.393020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.393227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.393263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.393555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.393589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.393787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.393820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.393957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.393991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.394235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.394271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.394525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.394558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.394744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.394778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.394890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.394923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.395118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.395152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.395270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.395304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.395443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.395476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.395616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.395648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.395856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.395890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.396166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.396200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.396462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.396495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.396743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.396777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.396900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.396933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.397119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.397153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.397289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.397324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.397498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.397531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.397798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.397832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.397963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.397998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.398183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.398244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 
00:29:29.781 [2024-11-20 12:43:35.398368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.398401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.398597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.781 [2024-11-20 12:43:35.398629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.781 qpair failed and we were unable to recover it. 00:29:29.781 [2024-11-20 12:43:35.398755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.398787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.398983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.399133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.782 [2024-11-20 12:43:35.399358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.399516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.399728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.399956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.399989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.400249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.400284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.782 [2024-11-20 12:43:35.400401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.400434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.400555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.400801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.400835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.401118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.401151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.401408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.401442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.782 [2024-11-20 12:43:35.401631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.401663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.401787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.401821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.402095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.402128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.402263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.402297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.402475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.402509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.782 [2024-11-20 12:43:35.402633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.402665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.402774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.402806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.403074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.403108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.403218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.403253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.403500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.403533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.782 [2024-11-20 12:43:35.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.403831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.404029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.404062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.404170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.404214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.404410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.404444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 00:29:29.782 [2024-11-20 12:43:35.404620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.782 [2024-11-20 12:43:35.404654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.782 qpair failed and we were unable to recover it. 
00:29:29.785 [2024-11-20 12:43:35.429612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.429645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.429835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.429869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.430058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.430091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.430284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.430318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.430529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.430580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 
00:29:29.785 [2024-11-20 12:43:35.430698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.430731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.430973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.431006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.431138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.431170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.431519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.785 [2024-11-20 12:43:35.431594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.785 qpair failed and we were unable to recover it. 00:29:29.785 [2024-11-20 12:43:35.431825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.431861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.432059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.432093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.432282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.432317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.432459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.432492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.432754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.432787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.432977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.433011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.433185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.433243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.433522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.433555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.433733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.433767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.433951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.433984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.434182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.434229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.434498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.434531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.434757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.434790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.434978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.435010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.435136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.435177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.435385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.435420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.435597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.435629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.435815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.435847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.436040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.436072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.436365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.436399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.436657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.436689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.436865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.436898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.437137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.437170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.437369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.437406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.437661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.437694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.437832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.437866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.437989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.438021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.438198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.438258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.438462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.438496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.438738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.438773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.439059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.439092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.439341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.439376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.439505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.439539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.439740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.439773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.439911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.439944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.440185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.440227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 
00:29:29.786 [2024-11-20 12:43:35.440356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.786 [2024-11-20 12:43:35.440391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.786 qpair failed and we were unable to recover it. 00:29:29.786 [2024-11-20 12:43:35.440589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.440622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.440881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.440914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.441172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.441215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.441425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.441458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.441735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.441774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.441988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.442021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.442218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.442253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.442445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.442477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.442668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.442701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.442965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.442999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.443220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.443255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.443450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.443483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.443739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.443774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.444017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.444050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.444186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.444230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.444411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.444445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.444568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.444601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.444887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.444926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.445137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.445170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.445399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.445436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.445615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.445648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.445886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.445919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.446090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.446122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.446239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.446273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.446461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.446495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.446736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.446770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.446955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.446988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.447265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 
00:29:29.787 [2024-11-20 12:43:35.447491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.447637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.447785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.447921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.447954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.787 qpair failed and we were unable to recover it. 00:29:29.787 [2024-11-20 12:43:35.448138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.787 [2024-11-20 12:43:35.448170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 
00:29:29.788 [2024-11-20 12:43:35.448307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.788 [2024-11-20 12:43:35.448346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 00:29:29.788 [2024-11-20 12:43:35.448608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.788 [2024-11-20 12:43:35.448641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 00:29:29.788 [2024-11-20 12:43:35.448836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.788 [2024-11-20 12:43:35.448869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 00:29:29.788 [2024-11-20 12:43:35.449058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.788 [2024-11-20 12:43:35.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 00:29:29.788 [2024-11-20 12:43:35.449353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.788 [2024-11-20 12:43:35.449389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.788 qpair failed and we were unable to recover it. 
00:29:29.790 [2024-11-20 12:43:35.472969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.790 [2024-11-20 12:43:35.473002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.790 qpair failed and we were unable to recover it. 00:29:29.790 [2024-11-20 12:43:35.473265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.790 [2024-11-20 12:43:35.473299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.790 qpair failed and we were unable to recover it. 00:29:29.790 [2024-11-20 12:43:35.473475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.790 [2024-11-20 12:43:35.473508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.790 qpair failed and we were unable to recover it. 00:29:29.790 [2024-11-20 12:43:35.473702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.473734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.474019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.474052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.474317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.474352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.474527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.474559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.474769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.474802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.474992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.475024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.475251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.475288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.475556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.475589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.475836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.475868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.476005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.476037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.476309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.476344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.476535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.476568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.476808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.476841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.477032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.477064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.477344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.477378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.477566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.477598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.477797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.477829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.478009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.478047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.478165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.478198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.478380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.478413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.478613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.478645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.478841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.478873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.479004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.479036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.479279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.479317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.479535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.479568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.479815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.479848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.480036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.480070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.480266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.480300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.480425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.480458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.480633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.480664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.480849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.480881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.481154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.481188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.481375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.481409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.481593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.481627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.481826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.481858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.482034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.482067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.791 [2024-11-20 12:43:35.482286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.482320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 
00:29:29.791 [2024-11-20 12:43:35.482504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.791 [2024-11-20 12:43:35.482536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.791 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.482777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.482809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.482993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.483025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.483147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.483179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.483376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.483411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.483653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.483685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.483932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.483964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.484119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.484151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.484388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.484611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.484644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.484770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.484801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.484933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.484965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.485170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.485213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.485465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.485498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.485613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.485646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.485760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.485793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.485902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.485934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.486185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.486227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.486434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.486467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.486594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.486626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.486807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.486846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.487090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.487122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.487307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.487343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.487473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.487505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.487701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.487735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.488053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.488087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.488236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.488272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.488413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.488448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.488639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.488671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.488848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.488881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 
00:29:29.792 [2024-11-20 12:43:35.489055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.489088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.489216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.489252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.489367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.489398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.489517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.792 [2024-11-20 12:43:35.489550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.792 qpair failed and we were unable to recover it. 00:29:29.792 [2024-11-20 12:43:35.489739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.793 [2024-11-20 12:43:35.489774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:29.793 qpair failed and we were unable to recover it. 
00:29:29.793 [2024-11-20 12:43:35.489895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.793 [2024-11-20 12:43:35.489926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:29.793 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock errno = 111 record repeats for tqpair=0x7f1ad8000b90, addr=10.0.0.2, port=4420, with timestamps 12:43:35.490 through 12:43:35.494 ...]
00:29:29.793 [2024-11-20 12:43:35.494298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.793 [2024-11-20 12:43:35.494352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:29.793 qpair failed and we were unable to recover it.
[... the same record then repeats for tqpair=0x7b9ba0, addr=10.0.0.2, port=4420, with timestamps 12:43:35.494 through 12:43:35.509 ...]
00:29:30.080 [2024-11-20 12:43:35.509016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-11-20 12:43:35.509039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.080 qpair failed and we were unable to recover it.
00:29:30.080 [2024-11-20 12:43:35.509194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.509239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.509411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.509435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.509597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.509621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.509784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.509808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.509970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.509993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-11-20 12:43:35.510103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.510290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.510424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.510602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.510783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-11-20 12:43:35.510894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.510918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.511021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.511156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.511342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.511527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-11-20 12:43:35.511703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.511883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.511906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.512083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.512211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.512347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-11-20 12:43:35.512543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.512668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.512882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.512906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-11-20 12:43:35.513070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-11-20 12:43:35.513094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.513212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.513236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.513458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.513482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.513660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.513684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.513783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.513805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.513906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.513930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.514102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.514125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.514286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.514312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.514489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.514513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.514593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.514618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.514787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.514811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.515012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.515214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.515394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.515639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.515815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.515928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.515951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.516130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.516154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.516251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.516276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.516439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.516464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.516615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.516638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.516792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.516816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.517009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.517214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.517410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.517596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.517787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.517972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.517996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.518214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.518239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.518475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.518498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.518618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.518641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.518793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.518817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.518905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.518928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.519030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.519299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.519408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.519516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 
00:29:30.081 [2024-11-20 12:43:35.519785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.519973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.081 [2024-11-20 12:43:35.519996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.081 qpair failed and we were unable to recover it. 00:29:30.081 [2024-11-20 12:43:35.520097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.520121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.520215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.520240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.520343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.520367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 
00:29:30.082 [2024-11-20 12:43:35.520561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.520585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.520820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.520844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.520999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.521031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.521140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.521164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.521390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.521415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 
00:29:30.082 [2024-11-20 12:43:35.521569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.521593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.521745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.521769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.521991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.522016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.522233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.522258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 00:29:30.082 [2024-11-20 12:43:35.522485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.082 [2024-11-20 12:43:35.522509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.082 qpair failed and we were unable to recover it. 
00:29:30.082 [2024-11-20 12:43:35.522753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.082 [2024-11-20 12:43:35.522777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.082 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 12:43:35.522943 through 12:43:35.542538; duplicate entries omitted ...]
00:29:30.085 [2024-11-20 12:43:35.542651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.542674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.542760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.542783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.542895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.542918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.543071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.543094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.543254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.543277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 
00:29:30.085 [2024-11-20 12:43:35.543379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.543402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.543560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.543634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.543831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.543865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.544140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.544173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.544309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.544334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 
00:29:30.085 [2024-11-20 12:43:35.544493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.544515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.544759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.544781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.544969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.544991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.545098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.545121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.545225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.545249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 
00:29:30.085 [2024-11-20 12:43:35.545416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.545439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.545598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.545621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.085 [2024-11-20 12:43:35.545733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.085 [2024-11-20 12:43:35.545756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.085 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.545923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.545946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.546111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.546234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.546410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.546516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.546640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.546834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.546856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.547007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.547181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.547314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.547430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.547540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.547657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.547903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.547927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.548085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.548108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.548278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.548305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.548526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.548549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.548713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.548737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.548888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.548912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.549417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.549826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.549850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.550040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.550063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.550156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.550180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.550404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.550478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.550746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.550820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.551036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.551073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.551281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.551319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.086 [2024-11-20 12:43:35.551543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.551577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.551713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.551746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.551866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.551899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.552020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.552054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-11-20 12:43:35.552233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-11-20 12:43:35.552268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 
00:29:30.087 [2024-11-20 12:43:35.552561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.552589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.552756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.552779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 
00:29:30.087 [2024-11-20 12:43:35.553487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.553968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.553991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.554158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.554182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 
00:29:30.087 [2024-11-20 12:43:35.554302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.554325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.554606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.554629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.554797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.554820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.554920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.554943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.555096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 
00:29:30.087 [2024-11-20 12:43:35.555229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.555416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.555608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.555726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 00:29:30.087 [2024-11-20 12:43:35.555867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.087 [2024-11-20 12:43:35.555891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.087 qpair failed and we were unable to recover it. 
00:29:30.087 [2024-11-20 12:43:35.556045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.087 [2024-11-20 12:43:35.556068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.087 qpair failed and we were unable to recover it.
[... the same three-line failure record — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable — repeats continuously from 12:43:35.556 through 12:43:35.575 ...]
00:29:30.090 [2024-11-20 12:43:35.575278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.575302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.575408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.575431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.575536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.575560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.575754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.575777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.575891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.575914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-11-20 12:43:35.576068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.576250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.576444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.576652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.576828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 
00:29:30.090 [2024-11-20 12:43:35.576935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.576958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.577242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.577266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.577369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 12:43:35.577392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.090 qpair failed and we were unable to recover it. 00:29:30.090 [2024-11-20 12:43:35.577500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.577681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.577704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.577877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.577899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.578005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.578121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.578308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.578491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.578675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.578862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.578885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.579053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.579182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.579426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.579625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.579922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.579946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.580109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.580132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.580297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.580321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.580475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.580498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.580691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.580715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.580798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.580821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.580994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.581177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.581355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.581472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.581577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.581771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.581958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.581980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.582078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.582101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.582295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.582318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.582506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.582529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.582685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.582708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.582858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.582881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.582990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.583174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.583307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.583431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.583622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 
00:29:30.091 [2024-11-20 12:43:35.583862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.583885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.091 [2024-11-20 12:43:35.584034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 12:43:35.584057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.091 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.584217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.584394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.584515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 
00:29:30.092 [2024-11-20 12:43:35.584640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.584754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.584959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.584983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.585088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.585112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.585275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.585299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 
00:29:30.092 [2024-11-20 12:43:35.585483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.585506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.585660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.585682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.585903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.585926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 
00:29:30.092 [2024-11-20 12:43:35.586342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.586960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.586983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 
00:29:30.092 [2024-11-20 12:43:35.587150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.587173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.587358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.587383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.587551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.587574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.587676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.587699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 00:29:30.092 [2024-11-20 12:43:35.587868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.092 [2024-11-20 12:43:35.587895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.092 qpair failed and we were unable to recover it. 
00:29:30.092 [2024-11-20 12:43:35.587981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.588847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.588870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.589042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.589195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.589226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.589445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.589468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.589559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.589582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.589809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.589832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.590002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.590025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.590185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.590216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.590390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.590412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.092 qpair failed and we were unable to recover it.
00:29:30.092 [2024-11-20 12:43:35.590680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.092 [2024-11-20 12:43:35.590704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.590799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.590821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.590923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.590946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.591121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.591254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.591541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.591659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.591833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.591998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.592198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.592338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.592525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.592741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.592923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.592946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.593136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.593159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.593280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.593304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.593454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.593477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.593632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.593654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.593903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.593926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.594953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.594979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.595104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.595290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.595398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.595575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.595849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.595999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.596023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.596123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.596146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.093 [2024-11-20 12:43:35.596347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.093 [2024-11-20 12:43:35.596371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.093 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.596467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.596490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.596588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.596611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.596770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.596793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.597907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.597930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.598943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.598965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.599873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.599896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.600183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.600379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.600573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.600759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.600884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.600993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.601692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.601943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.602014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.602159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.602196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.602429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.602463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.602680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.602713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.602823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.602855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.603014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.094 [2024-11-20 12:43:35.603087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.094 qpair failed and we were unable to recover it.
00:29:30.094 [2024-11-20 12:43:35.603376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.603402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.603507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.603529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.603614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.603636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.603751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.603774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.603872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.603894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.604067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.604090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.604270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.604294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.604452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.604474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.604631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.604655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.604804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.604827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.605800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.605823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.606057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.606198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.606400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.606646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.606763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.606980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.607172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.607426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.607564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.607699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.607969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.607991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.608088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.095 [2024-11-20 12:43:35.608112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.095 qpair failed and we were unable to recover it.
00:29:30.095 [2024-11-20 12:43:35.608335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.608359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.608458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.608480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.608653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.608676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.608831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.608854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.609016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 
00:29:30.095 [2024-11-20 12:43:35.609143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.609346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.609545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.609672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.095 [2024-11-20 12:43:35.609792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 
00:29:30.095 [2024-11-20 12:43:35.609964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.095 [2024-11-20 12:43:35.609986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.095 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.610140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.610364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.610469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.610647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.610840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.610964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.610986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.611081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.611382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.611572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.611793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.611918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.611941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.612169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.612192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.612308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.612330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.612513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.612536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.612628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.612650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.612810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.612832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.613355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.613974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.613998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.614088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.614215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.614392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.614519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.614704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.614888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.614911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.615083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.615295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.615507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.615611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 
00:29:30.096 [2024-11-20 12:43:35.615795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.615967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.615989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.616081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.616104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.616216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.096 [2024-11-20 12:43:35.616239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.096 qpair failed and we were unable to recover it. 00:29:30.096 [2024-11-20 12:43:35.616318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.616340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.616500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.616524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.616694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.616717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.616957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.616981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.617216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.617241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.617415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.617438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.617602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.617624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.617844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.617866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.617964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.617987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.618196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.618226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.618407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.618432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.618659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.618684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.618862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.618887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.618988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.619134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.619255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.619470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.619670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.619849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.619871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.620061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.620084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.620304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.620328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.620440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.620461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.620553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.620575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.620771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.620794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.621013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.621036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 00:29:30.097 [2024-11-20 12:43:35.621152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.097 [2024-11-20 12:43:35.621175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.097 qpair failed and we were unable to recover it. 
00:29:30.097 [2024-11-20 12:43:35.621366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.097 [2024-11-20 12:43:35.621391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.097 qpair failed and we were unable to recover it.
00:29:30.097 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages for tqpair=0x7b9ba0 (addr=10.0.0.2, port=4420) repeated with varying timestamps through 12:43:35.641 ...]
00:29:30.100 [2024-11-20 12:43:35.641306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.641328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.641478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.641500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.641744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.641768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.641862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.641885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.642051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.642073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 
00:29:30.100 [2024-11-20 12:43:35.642293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.642317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.642469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.642492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.642588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.642614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.100 qpair failed and we were unable to recover it. 00:29:30.100 [2024-11-20 12:43:35.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.100 [2024-11-20 12:43:35.642751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.642863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.642885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.643036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.643303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.643447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.643566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.643788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.643973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.643996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.644188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.644317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.644427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.644532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.644805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.644944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.644967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.645135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.645250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.645371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.645587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.645709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.645886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.645908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.646073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.646096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.646264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.646288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.646436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.646459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.646557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.646580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.646743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.646765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.647031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.647153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.647346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.647534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.647762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.647938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.647960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.648045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.648153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.648462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.648579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.648702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.648805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 
00:29:30.101 [2024-11-20 12:43:35.648934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.648957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.101 qpair failed and we were unable to recover it. 00:29:30.101 [2024-11-20 12:43:35.649101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.101 [2024-11-20 12:43:35.649125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.649279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.649312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.649502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.649524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.649637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.649660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.649844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.649866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.650051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.650294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.650435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.650693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.650867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.650976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.650999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.651144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.651167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.651338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.651362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.651529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.651553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.651723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.651746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.651860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.651882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.651982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.652168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.652307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.652545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.652721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.652858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.652880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.653305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 00:29:30.102 [2024-11-20 12:43:35.653921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.102 [2024-11-20 12:43:35.653943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.102 qpair failed and we were unable to recover it. 
00:29:30.102 [2024-11-20 12:43:35.654045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.102 [2024-11-20 12:43:35.654071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.102 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 2024-11-20 12:43:35.654224 through 12:43:35.674498 (console time 00:29:30.102-00:29:30.105): every connect() attempt to tqpair=0x7b9ba0 at addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED), and each qpair fails without recovering ...]
00:29:30.105 [2024-11-20 12:43:35.674653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.105 [2024-11-20 12:43:35.674676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.105 qpair failed and we were unable to recover it. 00:29:30.105 [2024-11-20 12:43:35.674906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.105 [2024-11-20 12:43:35.674929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.105 qpair failed and we were unable to recover it. 00:29:30.105 [2024-11-20 12:43:35.675026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.105 [2024-11-20 12:43:35.675050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.105 qpair failed and we were unable to recover it. 00:29:30.105 [2024-11-20 12:43:35.675227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.105 [2024-11-20 12:43:35.675250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.675417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.675440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.675522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.675545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.675715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.675738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.675894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.675918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.676079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.676102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.676284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.676308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.676400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.676424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.676594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.676618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.676863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.676887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.677072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.677095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.677244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.677268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.677482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.677506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.677625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.677648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.677799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.677821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.677984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.678158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.678425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.678601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.678729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.678847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.678972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.678995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.679193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.679314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.679433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.679623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.679797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.679904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.679928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.680115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.680288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.680494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.680616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.680735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.680926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.680950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.681031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.681054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.681210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.681235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.681398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.681422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 
00:29:30.106 [2024-11-20 12:43:35.681586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.681608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.681761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.106 [2024-11-20 12:43:35.681785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.106 qpair failed and we were unable to recover it. 00:29:30.106 [2024-11-20 12:43:35.681876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.681899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.682087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.682306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.682439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.682617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.682735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.682976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.682998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.683125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.683252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.683361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.683479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.683729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.683969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.683992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.684222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.684245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.684407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.684430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.684516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.684539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.684640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.684662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.684827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.684851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.685118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.685141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.685375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.685398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.685500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.685527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.685693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.685715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.685905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.685928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.686045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.686068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.686309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.686334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.686503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.686527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.686688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.686711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.686950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.686974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.687191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.687219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.687383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.687407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.687518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.687541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.687755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.687777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.687960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.687983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 
00:29:30.107 [2024-11-20 12:43:35.688131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.107 [2024-11-20 12:43:35.688154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.107 qpair failed and we were unable to recover it. 00:29:30.107 [2024-11-20 12:43:35.688317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.688342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.688453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.688476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.688628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.688651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.688805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.688828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.688984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.689091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.689267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.689389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.689594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.689710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.689974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.689998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.690161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.690184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.690358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.690382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.690544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.690570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.690785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.690808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.690906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.690930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.691145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.691352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.691492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.691672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.691813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.691929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.691953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.692104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.692217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.692356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.692620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.692748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.692960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.692983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.693137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.693161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.693350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.693372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.693542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.693565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.693661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.693687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.693779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.693801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.693982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.108 [2024-11-20 12:43:35.694222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.694398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.694516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.694645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 00:29:30.108 [2024-11-20 12:43:35.694855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.108 [2024-11-20 12:43:35.694879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.108 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.695030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.695233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.695361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.695546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.695717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.695842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.695864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.696017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.696188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.696310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.696548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.696791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.696975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.696998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.697216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.697239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.697351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.697374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.697531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.697554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.697649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.697675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.697841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.697866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.698113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.698136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.698234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.698258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.698477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.698501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.698660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.698683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.698782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.698805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.698997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.699194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.699392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.699571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.699765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.699888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.699912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.700059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.700081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.700318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.700341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.700495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.700517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.700666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.700690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.700865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.700887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.701044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.701066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.701184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.701216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 
00:29:30.109 [2024-11-20 12:43:35.701328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.701353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.701540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.701563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.701724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.109 [2024-11-20 12:43:35.701747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.109 qpair failed and we were unable to recover it. 00:29:30.109 [2024-11-20 12:43:35.701840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.701863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.702026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 
00:29:30.110 [2024-11-20 12:43:35.702131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.702305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.702498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.702675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.702866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.702889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 
00:29:30.110 [2024-11-20 12:43:35.702981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.703099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.703295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.703473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.703670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 
00:29:30.110 [2024-11-20 12:43:35.703777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.703950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.703973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.704127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.704150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.704300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.704324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 00:29:30.110 [2024-11-20 12:43:35.704543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.110 [2024-11-20 12:43:35.704566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.110 qpair failed and we were unable to recover it. 
00:29:30.110 [2024-11-20 12:43:35.704817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.110 [2024-11-20 12:43:35.704840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.110 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 12:43:35.704998 through 12:43:35.726830 ...]
00:29:30.113 [2024-11-20 12:43:35.727017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.113 [2024-11-20 12:43:35.727041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.113 qpair failed and we were unable to recover it.
00:29:30.113 [2024-11-20 12:43:35.727221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.727245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.727347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.727370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.727545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.727568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.727689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.727711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.727871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.727894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 
00:29:30.113 [2024-11-20 12:43:35.727992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.728014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.728238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.728273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.728406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.728439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.728646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.728677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.728902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.728935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 
00:29:30.113 [2024-11-20 12:43:35.729128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.729161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.729352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.729377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.729484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.729507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.729773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.729814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.730004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.730037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 
00:29:30.113 [2024-11-20 12:43:35.730218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.730252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.113 [2024-11-20 12:43:35.730443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.113 [2024-11-20 12:43:35.730477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.113 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.730741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.730785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.730874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.730897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.731117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.731150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.731312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.731345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.731452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.731486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.731769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.731808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.732091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.732123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.732249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.732287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.732372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.732395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.732617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.732650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.732828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.732861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.733046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.733251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.733464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.733581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.733767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.733947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.733979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.734159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.734189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.734333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.734367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.734506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.734541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.734789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.734821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.735062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.735094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.735282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.735306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.735391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.735413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.735644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.735667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.735883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.735907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.736066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.736090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.736251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.736274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.736444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.736476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.736663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.736696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.736888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.736921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.737110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.737133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.737349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.737377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.737558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.737580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.737675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.737698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.737796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.737819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.737987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.738009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 00:29:30.114 [2024-11-20 12:43:35.738173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.114 [2024-11-20 12:43:35.738197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.114 qpair failed and we were unable to recover it. 
00:29:30.114 [2024-11-20 12:43:35.738354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.738376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.738489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.738512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.738662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.738684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.738773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.738795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.738944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.738983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 
00:29:30.115 [2024-11-20 12:43:35.739177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.739215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.739327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.739359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.739531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.739564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.739698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.739732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.739845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.739887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 
00:29:30.115 [2024-11-20 12:43:35.740132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.740155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.740308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.740331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.740522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.740555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.740749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.740780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.740923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.740957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 
00:29:30.115 [2024-11-20 12:43:35.741197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.741228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.741383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.741416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.741532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.741563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.741740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.741773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 00:29:30.115 [2024-11-20 12:43:35.742041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.115 [2024-11-20 12:43:35.742074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.115 qpair failed and we were unable to recover it. 
00:29:30.115 [2024-11-20 12:43:35.742262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.115 [2024-11-20 12:43:35.742286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.115 qpair failed and we were unable to recover it.
[identical error sequence (errno 111, tqpair=0x7b9ba0, 10.0.0.2:4420) repeated for every retry through timestamp 12:43:35.765456; duplicates elided]
00:29:30.118 [2024-11-20 12:43:35.765626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.765660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.765783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.765816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.766067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.766100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.766289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.766323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.766465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 
00:29:30.118 [2024-11-20 12:43:35.766707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.766731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.766835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.766859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.767024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.767047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.767161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.767183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.767287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.767311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 
00:29:30.118 [2024-11-20 12:43:35.767545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.767578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.767703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.767736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.767990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.768022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.768269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.768304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.768495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.768527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 
00:29:30.118 [2024-11-20 12:43:35.768715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.768749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.768957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.768990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.769103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.769136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.769307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.769341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 00:29:30.118 [2024-11-20 12:43:35.769530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.118 [2024-11-20 12:43:35.769553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.118 qpair failed and we were unable to recover it. 
00:29:30.118 [2024-11-20 12:43:35.769639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.769681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.769870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.769902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.770039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.770071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.770188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.770239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.770482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.770505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.770657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.770680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.770787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.770810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.771027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.771059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.771278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.771313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.771437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.771470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.771655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.771689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.771932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.771965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.772254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.772288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.772412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.772445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.772708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.772741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.772868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.772901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.773144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.773189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.773360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.773383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.773542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.773565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.773723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.773746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.774001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.774177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.774320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.774483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.774689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.774846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.774880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.775175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.775226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.775356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.775389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.775575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.775607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.775750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.775783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.775991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.776026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.776233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.776270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.776538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.776579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.776701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.776735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.776865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.776903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.776991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.777015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.777220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.777255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.777503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.777536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.777669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.777702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.777901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.777934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 
00:29:30.119 [2024-11-20 12:43:35.778122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.778144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.119 [2024-11-20 12:43:35.778316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.119 [2024-11-20 12:43:35.778351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.119 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.778462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.778495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.778602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.778634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.778838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.778870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 
00:29:30.120 [2024-11-20 12:43:35.778975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.779008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.779230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.779263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.779394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.779417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.779580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.779602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.779785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.779819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 
00:29:30.120 [2024-11-20 12:43:35.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.780030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.780118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.780141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.780359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.780383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.780593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.780616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 00:29:30.120 [2024-11-20 12:43:35.780785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.120 [2024-11-20 12:43:35.780808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.120 qpair failed and we were unable to recover it. 
00:29:30.120 [2024-11-20 12:43:35.781027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.781060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.781262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.781296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.781472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.781495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.781743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.781776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.781917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.781949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.782090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.782248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.782404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.782617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.782844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.782968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.783001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.783177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.783217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.783477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.783500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.783664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.783688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.783908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.783941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.784111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.784145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.784352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.784387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.784502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.784536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.784658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.784691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.784815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.784847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.785034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.785067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.120 [2024-11-20 12:43:35.785253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.120 [2024-11-20 12:43:35.785288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.120 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.785483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.785506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.785664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.785705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.785898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.785932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.786196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.786248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.786352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.786375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.786666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.786699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.786906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.786939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.787232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.787271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.787445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.787468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.787713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.787746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.787937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.787970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.788234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.788276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.788455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.788479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.788722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.788745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.788853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.788876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.789885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.789985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.790907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.790940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.791908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.791941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.792970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.121 [2024-11-20 12:43:35.792993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.121 qpair failed and we were unable to recover it.
00:29:30.121 [2024-11-20 12:43:35.793237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.793272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.793480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.793512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.793630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.793663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.793854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.793886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.794151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.794174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.794282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.794307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.794483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.794515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.794823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.795038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.795075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.795249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.795382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.795416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.795543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.795576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.795754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.795787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.796008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.796042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.796249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.796285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.796464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.796503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.796697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.796723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.796840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.796864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.797030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.797064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.797308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.797342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.797516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.797549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.797745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.797779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.797965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.797997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.798120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.798153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.798343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.798378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.798555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.798587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.798783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.798815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.799027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.799061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.799303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.799338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.799597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.799629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.799805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.799839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.800048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.800081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.800252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.800287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.800402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.800435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.800677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.800716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.800901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.801209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.801340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.801557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.801680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.122 [2024-11-20 12:43:35.801822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.122 qpair failed and we were unable to recover it.
00:29:30.122 [2024-11-20 12:43:35.801925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.801947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.802815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.802847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.803037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.803070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.803325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.803349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.803444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.803467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.803664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.123 [2024-11-20 12:43:35.803687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.123 qpair failed and we were unable to recover it.
00:29:30.123 [2024-11-20 12:43:35.803789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.803811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.803983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.804006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.804228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.804263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.804389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.804421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.804688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.804721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.804854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.804888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.805081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.805113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.805286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.805337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.805528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.805561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.805746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.805784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.805987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.806157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.806375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.806542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.806679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.806840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.806873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.806995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.807028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.807154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.807187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.807377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.807411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.807593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.807615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.807867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.807900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.808032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.808254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.808400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.808528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.808716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.808904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.808936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.809070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.809103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.809229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.809263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.809478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.809511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 
00:29:30.123 [2024-11-20 12:43:35.809648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.809682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.809874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.809907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.810034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.810066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.810208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.123 [2024-11-20 12:43:35.810243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.123 qpair failed and we were unable to recover it. 00:29:30.123 [2024-11-20 12:43:35.810352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.810385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.810622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.810655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.810832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.810865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.811133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.811166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.811368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.811392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.811499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.811522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.811706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.811729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.811877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.811900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.812072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.812104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.812241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.812275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.812455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.812489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.812683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.812718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.812974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.813006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.813197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.813243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.813471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.813495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.813685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.813717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.813902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.813941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.814064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.814097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.814285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.814310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.814419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.814442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.814707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.814730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.814840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.814863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.815018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.815222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.815356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.815565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.815747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.815921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.815944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.816098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.816121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.816367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.816403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.816603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.816636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 
00:29:30.124 [2024-11-20 12:43:35.816758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.816791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.816920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.816953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.124 qpair failed and we were unable to recover it. 00:29:30.124 [2024-11-20 12:43:35.817176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.124 [2024-11-20 12:43:35.817218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.125 qpair failed and we were unable to recover it. 00:29:30.125 [2024-11-20 12:43:35.817394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.125 [2024-11-20 12:43:35.817426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.125 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.817620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.817644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 [2024-11-20 12:43:35.817800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.817834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.817961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.817994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.818130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.818162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.818385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.818420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.818685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.818707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 [2024-11-20 12:43:35.818815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.818837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.819005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.819042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.819174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.819222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.819411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.819445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.819568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.819609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 [2024-11-20 12:43:35.819718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.819741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.819985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.820008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.820166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.820199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.820428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.820461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.820649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.820682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 [2024-11-20 12:43:35.820807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.820840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.821037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.821215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.821383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.821558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 [2024-11-20 12:43:35.821722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.821916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.821988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.822147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.822183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.822427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.822536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.822733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.822768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 
00:29:30.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 342856 Killed "${NVMF_APP[@]}" "$@" 00:29:30.410 [2024-11-20 12:43:35.823014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.823046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.410 [2024-11-20 12:43:35.823239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.410 [2024-11-20 12:43:35.823275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.410 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.823429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.823625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.823656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.823893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.823925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.824032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:30.411 [2024-11-20 12:43:35.824065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.824247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.824282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.411 [2024-11-20 12:43:35.824408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.824439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.411 [2024-11-20 12:43:35.824709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.824741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.824834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.824858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.411 [2024-11-20 12:43:35.825012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.825193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.825390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.825513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.825887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.825911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.826004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.826191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.826460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.826583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.826761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.826900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.826924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.827028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.827242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.827427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.827541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.827718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.827898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.827920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.828025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.828048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.828254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.828278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.828445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.828468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.828618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.828642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.828879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.828902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 
00:29:30.411 [2024-11-20 12:43:35.829095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.829118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.829296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.829321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.829424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.411 [2024-11-20 12:43:35.829446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.411 qpair failed and we were unable to recover it. 00:29:30.411 [2024-11-20 12:43:35.829596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.829617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.829728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.829751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.829910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.829932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.830124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.830147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.830301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.830325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.830524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.830546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.830658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.830680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.830831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.830854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.831009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.831128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.831303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.831419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.831595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=343812 00:29:30.412 [2024-11-20 12:43:35.831844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.831867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 343812 00:29:30.412 [2024-11-20 12:43:35.832057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.832245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 343812 ']' 00:29:30.412 [2024-11-20 12:43:35.832424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.832552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.412 [2024-11-20 12:43:35.832744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.832867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.832890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.832977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.833001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.833097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.833120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.833301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.412 [2024-11-20 12:43:35.833326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.833448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.833472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it.
00:29:30.412 [2024-11-20 12:43:35.833564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.412 [2024-11-20 12:43:35.833589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.833674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.833698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 12:43:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.412 [2024-11-20 12:43:35.833866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.833890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.833983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.834105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.834215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.834329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.834529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.834656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 00:29:30.412 [2024-11-20 12:43:35.834843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.412 [2024-11-20 12:43:35.834866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.412 qpair failed and we were unable to recover it. 
00:29:30.412 [2024-11-20 12:43:35.835026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.413 [2024-11-20 12:43:35.835048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.413 qpair failed and we were unable to recover it. 00:29:30.413 [2024-11-20 12:43:35.835147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.413 [2024-11-20 12:43:35.835171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.413 qpair failed and we were unable to recover it. 00:29:30.413 [2024-11-20 12:43:35.835361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.413 [2024-11-20 12:43:35.835386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.413 qpair failed and we were unable to recover it. 00:29:30.413 [2024-11-20 12:43:35.835487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.413 [2024-11-20 12:43:35.835515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.413 qpair failed and we were unable to recover it. 00:29:30.413 [2024-11-20 12:43:35.835618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.413 [2024-11-20 12:43:35.835641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.413 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.855786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.855809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.856572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.856939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.856965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.857158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.857182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.857348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.857371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.857470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.857492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.857660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.857682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.857833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.857856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.858019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.858218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.858391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.858563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.858803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.858931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.858954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.859068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.859090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.859310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.859334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.859497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.859520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.859683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.859705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.859922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.859945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.860100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.860293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.860480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.860666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.860848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.860961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.860984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 
00:29:30.416 [2024-11-20 12:43:35.861095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.861117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.861279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.416 [2024-11-20 12:43:35.861303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.416 qpair failed and we were unable to recover it. 00:29:30.416 [2024-11-20 12:43:35.861398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.861421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.861577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.861600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.861773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.861796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.861909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.861932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.862127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.862150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.862322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.862346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.862592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.862616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.862775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.862798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.862961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.862984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.863132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.863270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.863445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.863619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.863758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.863958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.863981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.864496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.864820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.864990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.865106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.865353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.865527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.865789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.865970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.866175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.866198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.866360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.866383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.866493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.866516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.866689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.866713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.866809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.866832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.866985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.867100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.867290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.867416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.867599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.417 [2024-11-20 12:43:35.867735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 
00:29:30.417 [2024-11-20 12:43:35.867977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.417 [2024-11-20 12:43:35.867999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.417 qpair failed and we were unable to recover it. 00:29:30.418 [2024-11-20 12:43:35.868149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.418 [2024-11-20 12:43:35.868172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.418 qpair failed and we were unable to recover it. 00:29:30.418 [2024-11-20 12:43:35.868405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.418 [2024-11-20 12:43:35.868430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.418 qpair failed and we were unable to recover it. 00:29:30.418 [2024-11-20 12:43:35.868671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.418 [2024-11-20 12:43:35.868694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.418 qpair failed and we were unable to recover it. 00:29:30.418 [2024-11-20 12:43:35.868859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.418 [2024-11-20 12:43:35.868882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.418 qpair failed and we were unable to recover it. 
00:29:30.418 [2024-11-20 12:43:35.869119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.418 [2024-11-20 12:43:35.869146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.418 qpair failed and we were unable to recover it.
[... the preceding connect()/qpair-failure triplet repeats with varying timestamps for tqpair=0x7b9ba0, 0x7f1ad0000b90, and 0x7f1acc000b90, all against addr=10.0.0.2, port=4420 ...]
00:29:30.419 [2024-11-20 12:43:35.881653] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:29:30.419 [2024-11-20 12:43:35.881694] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect()/qpair-failure triplet for tqpair=0x7b9ba0 continues through 12:43:35.890851 ...]
00:29:30.421 [2024-11-20 12:43:35.890947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.890969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.891215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.891240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.891458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.891481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.891580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.891603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.891771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.891794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 12:43:35.892041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.892158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.892365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.892538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.892708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 12:43:35.892888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.892911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.893062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.893084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.893236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.893260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.893450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.893474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.893712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.893735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 12:43:35.893831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.893854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.894021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.894146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.894359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.894537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 12:43:35.894728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.894969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.894992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.895077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.895099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.895214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.895237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.895386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.895408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 
00:29:30.421 [2024-11-20 12:43:35.895580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.895603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.895810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.895833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.896052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.421 [2024-11-20 12:43:35.896075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.421 qpair failed and we were unable to recover it. 00:29:30.421 [2024-11-20 12:43:35.896242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.896267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.896435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.896460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.896631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.896653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.896813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.896836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.896934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.896957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.897085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.897217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.897356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.897465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.897649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.897915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.897938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.898102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.898221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.898367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.898492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.898611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.898728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.898752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.899001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.899024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.899245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.899269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.899360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.899387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.899625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.899648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.899863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.899887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.900054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.900077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.900241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.900266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.900450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.900473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.900661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.900684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.900859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.900881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.900988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.901117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.901251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.901451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.901586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.901774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.901974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.901998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.902155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.902178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.902362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.902387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.902603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.902627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 
00:29:30.422 [2024-11-20 12:43:35.902856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.902879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.422 [2024-11-20 12:43:35.903045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.422 [2024-11-20 12:43:35.903068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.422 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.903157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.903180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.903368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.903392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.903545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.903568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 
00:29:30.423 [2024-11-20 12:43:35.903700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.903723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.903817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.903841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.904015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.904039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.904193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.904224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 00:29:30.423 [2024-11-20 12:43:35.904399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.423 [2024-11-20 12:43:35.904425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.423 qpair failed and we were unable to recover it. 
00:29:30.423 [2024-11-20 12:43:35.904667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.423 [2024-11-20 12:43:35.904690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.423 qpair failed and we were unable to recover it.
00:29:30.426 [... the same connect()/qpair-error triplet for tqpair=0x7b9ba0 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously through 12:43:35.924612 ...]
00:29:30.426 [2024-11-20 12:43:35.924760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.924783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.924899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.924921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.925071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.925095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.925264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.925289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.925402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 
00:29:30.426 [2024-11-20 12:43:35.925569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.925592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.925781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.925805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.926056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.926080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.926247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.926271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.926492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.926514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 
00:29:30.426 [2024-11-20 12:43:35.926678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.926701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.926802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.926825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.926988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 
00:29:30.426 [2024-11-20 12:43:35.927355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.927976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.927999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 
00:29:30.426 [2024-11-20 12:43:35.928161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.928184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.928362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.928385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.928488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.928512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.928597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.928621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 00:29:30.426 [2024-11-20 12:43:35.928859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.426 [2024-11-20 12:43:35.928882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.426 qpair failed and we were unable to recover it. 
00:29:30.426 [2024-11-20 12:43:35.929033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.929055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.929236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.929259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.929481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.929504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.929701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.929724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.929831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.929854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.930017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.930147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.930329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.930455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.930638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.930761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.930886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.930909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.931007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.931251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.931435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.931555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.931752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.931935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.931958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.932052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.932292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.932478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.932662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.932844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.932967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.932991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.933074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.933097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.933245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.933270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.933515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.933539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.933719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.933742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.933984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.934007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.934225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.934249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.934432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.934455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.934615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.934638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.934879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.934902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.935070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.935261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 
00:29:30.427 [2024-11-20 12:43:35.935390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.935515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.935642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.427 qpair failed and we were unable to recover it. 00:29:30.427 [2024-11-20 12:43:35.935754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.427 [2024-11-20 12:43:35.935777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.935947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.935970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 
00:29:30.428 [2024-11-20 12:43:35.936146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.936170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.936353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.936377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.936548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.936572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.936810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.936834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.936988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 
00:29:30.428 [2024-11-20 12:43:35.937188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.937372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.937613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.937720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 00:29:30.428 [2024-11-20 12:43:35.937835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.428 [2024-11-20 12:43:35.937858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.428 qpair failed and we were unable to recover it. 
00:29:30.428 [2024-11-20 12:43:35.938029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.428 [2024-11-20 12:43:35.938053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.428 qpair failed and we were unable to recover it.
00:29:30.431 [2024-11-20 12:43:35.958496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.958520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.958700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.958724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.958964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.958988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.959104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.959313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 
00:29:30.431 [2024-11-20 12:43:35.959442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.959569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.959744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.959937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.959961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.960058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.960081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 
00:29:30.431 [2024-11-20 12:43:35.960299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.960323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.960553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.960576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.960826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.960849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.960998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.961124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 
00:29:30.431 [2024-11-20 12:43:35.961308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.961418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.961607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.961791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.961898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.961921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 
00:29:30.431 [2024-11-20 12:43:35.962172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.962196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.431 [2024-11-20 12:43:35.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.431 [2024-11-20 12:43:35.962395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.431 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.962581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.962605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.962845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.962868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.963052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.963075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.963315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.963339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.963450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.963473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.963624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.963648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.963740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.432 [2024-11-20 12:43:35.963838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.963861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.964148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.964858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.964968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.964991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.965153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.965282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.965465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.965587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.965764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.965885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.965908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.966273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.966851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.966958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.966982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.432 [2024-11-20 12:43:35.967679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.967971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.967994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.968216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.968241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 00:29:30.432 [2024-11-20 12:43:35.968407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.432 [2024-11-20 12:43:35.968431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.432 qpair failed and we were unable to recover it. 
00:29:30.433 [2024-11-20 12:43:35.968527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.968551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.968702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.968725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.968881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.968906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.968991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.969163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 
00:29:30.433 [2024-11-20 12:43:35.969311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.969423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.969608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.969816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.969933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.969956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 
00:29:30.433 [2024-11-20 12:43:35.970145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.970168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.970339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.970364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.970520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.970544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.970744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.970769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 00:29:30.433 [2024-11-20 12:43:35.970874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.970898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it. 
00:29:30.433 [2024-11-20 12:43:35.971018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.433 [2024-11-20 12:43:35.971042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.433 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" messages for tqpair=0x7b9ba0, addr=10.0.0.2, port=4420 repeat continuously from 12:43:35.971131 through 12:43:35.990623; repeats elided]
00:29:30.436 [2024-11-20 12:43:35.990746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.990770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.990886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.991011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.991217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.991398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 
00:29:30.436 [2024-11-20 12:43:35.991574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.991699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.991875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.991898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.992071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.992190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 
00:29:30.436 [2024-11-20 12:43:35.992441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.992570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.992760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.992946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.992969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.993080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.993103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 
00:29:30.436 [2024-11-20 12:43:35.993326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.993351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.993593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.993617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.993717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.993741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.436 [2024-11-20 12:43:35.993843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.436 [2024-11-20 12:43:35.993866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.436 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.993955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.993978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.994193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.994223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.994439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.994468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.994577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.994601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.994791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.994815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.994913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.994937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.995048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.995071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.995183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.995213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.995435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.995459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.995622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.995646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.995809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.995833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.996051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.996703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.996952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.996976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.997193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.997321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.997452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.997575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.997770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.997952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.997976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.998126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.998302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.998432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.998571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.998775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.998896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.998920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 
00:29:30.437 [2024-11-20 12:43:35.999024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.999046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.999139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.999161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.437 [2024-11-20 12:43:35.999332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.437 [2024-11-20 12:43:35.999356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.437 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:35.999537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:35.999560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:35.999748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:35.999771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:35.999921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:35.999944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.000113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.000136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.000332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.000356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.000523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.000546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.000693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.000716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.000823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.000847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.000998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.001131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.001384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.001493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.001626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.001798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.001931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.001956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.002129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.002329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.002459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.002699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.002831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.002934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.002959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.003125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.003148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.003244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.003269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.003511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.003535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.003718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.003743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.003912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.003936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.004100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.004127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.004292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.004317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.004489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.004514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.004683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.004706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.004872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.004898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.004870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.438 [2024-11-20 12:43:36.004904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.438 [2024-11-20 12:43:36.004913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.438 [2024-11-20 12:43:36.004920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.438 [2024-11-20 12:43:36.004928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.438 [2024-11-20 12:43:36.004989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.005009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.005097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.005118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.005224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.005246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.005363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.005391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 00:29:30.438 [2024-11-20 12:43:36.005496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.438 [2024-11-20 12:43:36.005519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.438 qpair failed and we were unable to recover it. 
00:29:30.438 [2024-11-20 12:43:36.005696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.005719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.005811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.005834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.005931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.005955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.006039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.006166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.006300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.006569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.006504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:30.439 [2024-11-20 12:43:36.006607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:30.439 [2024-11-20 12:43:36.006623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:30.439 [2024-11-20 12:43:36.006628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:30.439 [2024-11-20 12:43:36.006689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.006923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.006947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.007110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.007134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.008940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.008991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.009232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.009259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.009390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.009414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.009595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.009619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.009701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.009724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.009833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.009857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.010511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.010958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.010980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.011071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.011259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.011438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.011558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.011751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.011876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.011899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.012011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.012236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.012414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.012548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.012661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 
00:29:30.439 [2024-11-20 12:43:36.012790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.012903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.013029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.439 [2024-11-20 12:43:36.013053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.439 qpair failed and we were unable to recover it. 00:29:30.439 [2024-11-20 12:43:36.013154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.013333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.013448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.013556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.013663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.013839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.013943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.013966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.014115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.014237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.014423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.014545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.014735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.014857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.014881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.015027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.015276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.015460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.015589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.015763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.015882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.015905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.016120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.016328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.016447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.016643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.016819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.016945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.016968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.017054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.017077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.017168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.017191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.017303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.017324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.017492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.017570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.017743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.017815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad8000b90 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.017998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.018072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.018269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.018296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.018522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.018544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.018710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.018733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.018828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.018851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.019003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.019025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.019112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.019136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 
00:29:30.440 [2024-11-20 12:43:36.019220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.019256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.440 [2024-11-20 12:43:36.019353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.440 [2024-11-20 12:43:36.019375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.440 qpair failed and we were unable to recover it. 00:29:30.441 [2024-11-20 12:43:36.019479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.441 [2024-11-20 12:43:36.019503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.441 qpair failed and we were unable to recover it. 00:29:30.441 [2024-11-20 12:43:36.019594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.441 [2024-11-20 12:43:36.019616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.441 qpair failed and we were unable to recover it. 00:29:30.441 [2024-11-20 12:43:36.019767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.441 [2024-11-20 12:43:36.019791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.441 qpair failed and we were unable to recover it. 
00:29:30.441 [2024-11-20 12:43:36.019948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.441 [2024-11-20 12:43:36.019971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.441 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets repeat with varying timestamps through 12:43:36.038 for tqpair=0x7b9ba0 and tqpair=0x7f1ad8000b90, all with addr=10.0.0.2, port=4420 ...]
00:29:30.444 [2024-11-20 12:43:36.038603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.444 [2024-11-20 12:43:36.038630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.444 qpair failed and we were unable to recover it.
00:29:30.444 [2024-11-20 12:43:36.038729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.038754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.038931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.038956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.039122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.039147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.039263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.039288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.039483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.039507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 
00:29:30.444 [2024-11-20 12:43:36.039617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.039642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.039839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.039864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 
00:29:30.444 [2024-11-20 12:43:36.040546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.040831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.040986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.041011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.041231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.041257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 
00:29:30.444 [2024-11-20 12:43:36.041481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.041505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.041669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.041694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.041791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.041815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.042003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.042141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 
00:29:30.444 [2024-11-20 12:43:36.042386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.042585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.042707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.042900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.042923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.043166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.043189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 
00:29:30.444 [2024-11-20 12:43:36.043370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.043394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.043579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.043602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.043703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.043726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.043952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.043975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.444 qpair failed and we were unable to recover it. 00:29:30.444 [2024-11-20 12:43:36.044144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.444 [2024-11-20 12:43:36.044167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.044339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.044363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.044526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.044550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.044725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.044747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.044919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.044942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.045098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.045238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.045357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.045597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.045771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.045958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.045982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.046134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.046159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.046312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.046337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.046499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.046523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.046616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.046640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.046803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.046827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.046978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.047002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.047169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.047192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.047357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.047381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.047609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.047633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.047875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.047899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.047986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.048163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.048347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.048562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.048762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.048937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.048959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.049129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.049153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.049330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.049356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.049507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.049531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.049654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.049678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.049781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.049806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-11-20 12:43:36.050553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-11-20 12:43:36.050938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.445 [2024-11-20 12:43:36.050962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.051114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.051316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 
00:29:30.446 [2024-11-20 12:43:36.051522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.051661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.051805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.051925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.051948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 00:29:30.446 [2024-11-20 12:43:36.052099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.052124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it. 
00:29:30.446 [2024-11-20 12:43:36.052354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.446 [2024-11-20 12:43:36.052380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.446 qpair failed and we were unable to recover it.
[The identical three-message failure sequence — posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7b9ba0 at addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it" — repeats for every subsequent reconnect attempt from 12:43:36.052486 through 12:43:36.072249; the duplicate entries are elided here.]
00:29:30.449 [2024-11-20 12:43:36.072403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.072426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.072667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.072690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.072850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.072873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.072964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.072987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.073172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.073195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 
00:29:30.449 [2024-11-20 12:43:36.073351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.073374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.073464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.073486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.073729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.073755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.073944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.073967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.074118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.074141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 
00:29:30.449 [2024-11-20 12:43:36.074262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.074286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.074379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.074401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.074577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.074600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.074760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.074782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.074998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 
00:29:30.449 [2024-11-20 12:43:36.075129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.075253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.075375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.075556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.075677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 
00:29:30.449 [2024-11-20 12:43:36.075847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.075871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.075990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.076013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.076175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.076197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.076428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.076452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.076550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.076573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 
00:29:30.449 [2024-11-20 12:43:36.076751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.449 [2024-11-20 12:43:36.076774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.449 qpair failed and we were unable to recover it. 00:29:30.449 [2024-11-20 12:43:36.076870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.076892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.077584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.077881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.077979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.078002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.078254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.078278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.078475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.078498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.078595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.078617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.078772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.078796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.078984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.079223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.079352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.079475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.079584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.079705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.079837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.079860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.080010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.080274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.080381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.080499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.080615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.080720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.080919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.080941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.081051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.081239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.081432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.081613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.081729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.081928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.081951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.082041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.082277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 
00:29:30.450 [2024-11-20 12:43:36.082492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.082679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.082969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.450 [2024-11-20 12:43:36.082992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.450 qpair failed and we were unable to recover it. 00:29:30.450 [2024-11-20 12:43:36.083218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.083243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 
00:29:30.451 [2024-11-20 12:43:36.083397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.083420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.083586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.083609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.083772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.083796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.083901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.083924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.084015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 
00:29:30.451 [2024-11-20 12:43:36.084254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.084381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.084563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.084697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 00:29:30.451 [2024-11-20 12:43:36.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.451 [2024-11-20 12:43:36.084834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.451 qpair failed and we were unable to recover it. 
00:29:30.451 [2024-11-20 12:43:36.085008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.451 [2024-11-20 12:43:36.085035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.451 qpair failed and we were unable to recover it.
00:29:30.451 [2024-11-20 12:43:36.087455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.451 [2024-11-20 12:43:36.087513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420
00:29:30.451 qpair failed and we were unable to recover it.
00:29:30.452 [2024-11-20 12:43:36.090373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.452 [2024-11-20 12:43:36.090424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1acc000b90 with addr=10.0.0.2, port=4420
00:29:30.452 qpair failed and we were unable to recover it.
[... same three-line error sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeated for tqpair=0x7b9ba0, 0x7f1ad0000b90, and 0x7f1acc000b90, addr=10.0.0.2, port=4420, from 12:43:36.085008 through 12:43:36.105257 ...]
00:29:30.454 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.454 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:30.454 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.454 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.454 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.454 [2024-11-20 12:43:36.107247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.107295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.107611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.107636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.107819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.107844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.454 [2024-11-20 12:43:36.108081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.108252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.108394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.108584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.108718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.454 [2024-11-20 12:43:36.108898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.108921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.109134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.109156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.109254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.109275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.109493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.109517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.109639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.109660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.454 [2024-11-20 12:43:36.109880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.109901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.110007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.110127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.110333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.110512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.454 [2024-11-20 12:43:36.110683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.110875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.110898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.454 [2024-11-20 12:43:36.111540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.111889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.111998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.112030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 00:29:30.454 [2024-11-20 12:43:36.112134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.454 [2024-11-20 12:43:36.112158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.454 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.112272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.112295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.112462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.112484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.112597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.112618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.112797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.112819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.113070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.113213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.113394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.113590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.113764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.113942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.113962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.114057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.114244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.114365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.114489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.114622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.114730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.114846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.114867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.115469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.115748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.115995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.116116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.116232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.116343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.116528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.116670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.116846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.116867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.117029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.117146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.117294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.117403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.117603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 
00:29:30.455 [2024-11-20 12:43:36.117723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.117832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.117854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.118083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.118104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.118265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.455 [2024-11-20 12:43:36.118289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.455 qpair failed and we were unable to recover it. 00:29:30.455 [2024-11-20 12:43:36.118375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.118396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-11-20 12:43:36.118482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.118504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.118748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.118769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.118861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.118882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.118970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.118991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.119143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-11-20 12:43:36.119344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.119486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.119606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.119732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 00:29:30.456 [2024-11-20 12:43:36.119974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-11-20 12:43:36.119995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.456 qpair failed and we were unable to recover it. 
00:29:30.456 [2024-11-20 12:43:36.120148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.456 [2024-11-20 12:43:36.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.456 qpair failed and we were unable to recover it.
00:29:30.459 [2024-11-20 12:43:36.137139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 
00:29:30.459 [2024-11-20 12:43:36.137743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.137972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.137993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.138077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.138192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 
00:29:30.459 [2024-11-20 12:43:36.138374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.138491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.138713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.138893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.138914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 00:29:30.459 [2024-11-20 12:43:36.139022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.459 [2024-11-20 12:43:36.139043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.459 qpair failed and we were unable to recover it. 
00:29:30.459 [2024-11-20 12:43:36.139217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.139400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.139524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.139636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.139831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.139938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.139960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.140549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.140955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.140978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.141127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.141296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.141413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.141550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.141704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.460 [2024-11-20 12:43:36.141856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.141889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1ad0000b90 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.142066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:30.460 [2024-11-20 12:43:36.142177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.142330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.142501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.460 [2024-11-20 12:43:36.142524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.142621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.142731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.460 [2024-11-20 12:43:36.142845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.142950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.142971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.143085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.143106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 
00:29:30.460 [2024-11-20 12:43:36.143194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.143222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.460 [2024-11-20 12:43:36.143317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.460 [2024-11-20 12:43:36.143337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.460 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.143440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.143461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.143614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.143636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.143721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.143743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.143840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.143862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.143943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.143964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.144384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.144905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.144928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.145007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.145626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.145949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.145970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.146061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.146244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.146445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.146637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.146738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 00:29:30.461 [2024-11-20 12:43:36.146843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it. 
00:29:30.461 [2024-11-20 12:43:36.146949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.461 [2024-11-20 12:43:36.146971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.461 qpair failed and we were unable to recover it.
[... identical three-line error sequence repeated through 12:43:36.162522, all with addr=10.0.0.2, port=4420; tqpair is 0x7b9ba0 throughout except for a run at 12:43:36.150147-36.151000 where it is 0x7f1ad0000b90 ...]
00:29:30.731 [2024-11-20 12:43:36.162668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.162689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.162773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.162795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.162895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.162916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.163008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.163115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 
00:29:30.731 [2024-11-20 12:43:36.163322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.163507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.163623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.163886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.163907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.164065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 
00:29:30.731 [2024-11-20 12:43:36.164240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.164413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.164589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.164710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.164898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.164919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 
00:29:30.731 [2024-11-20 12:43:36.165019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.165124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.165387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.165513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.165616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 
00:29:30.731 [2024-11-20 12:43:36.165793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.165966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.165987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.166071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.166092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.731 [2024-11-20 12:43:36.166238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.731 [2024-11-20 12:43:36.166260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.731 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.166355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.166376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.166484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.166505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.166615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.166636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.166782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.166803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.166955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.166976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.167076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.167199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.167329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.167443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.167560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.167804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.167922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.167943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.168027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.168234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.168496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.168694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.168800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.168967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.168988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.169074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.169188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.169330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.169517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.169710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.169818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.169839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.170079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.170249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.170421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.170540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.170732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.170849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.170870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.171020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.171041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 
00:29:30.732 [2024-11-20 12:43:36.171136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.171158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.171256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.171278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.171495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.171517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.732 qpair failed and we were unable to recover it. 00:29:30.732 [2024-11-20 12:43:36.171620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.732 [2024-11-20 12:43:36.171641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.171733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.171755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.171846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.171867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.171963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.171984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.172420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.172944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.172967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.173116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.173227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.173353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.173482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.173681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.173866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.173971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.173993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.174150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.174278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.174462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.174592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.174766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.174875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.174897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.175066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.175089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.175181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.175207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 00:29:30.733 [2024-11-20 12:43:36.175394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.733 [2024-11-20 12:43:36.175417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420 00:29:30.733 qpair failed and we were unable to recover it. 
00:29:30.733 [2024-11-20 12:43:36.175568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.175590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.175743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.175766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.175920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.175943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.176975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.176997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.733 [2024-11-20 12:43:36.177221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.733 [2024-11-20 12:43:36.177244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.733 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.177325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.177346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.177564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.177585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.177748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.177769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.177976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.177997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.178171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.178192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.178378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.178400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.178571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.178771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.178792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.179071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.179197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.179377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.179581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 Malloc0
00:29:30.734 [2024-11-20 12:43:36.179795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.179910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.179931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.180043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.180065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.180279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.180303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.180463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.180484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:30.734 [2024-11-20 12:43:36.180673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.180708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.180925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.180948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.734 [2024-11-20 12:43:36.181193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.181244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.734 [2024-11-20 12:43:36.181346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.181369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.181563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.181584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.181674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.181695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.181863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.181885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.181979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.182844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.182865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.183104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.183125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.183313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.183336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.183452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.183473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.183653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.183675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.183766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.734 [2024-11-20 12:43:36.183788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.734 qpair failed and we were unable to recover it.
00:29:30.734 [2024-11-20 12:43:36.184001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.184877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.184897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.185971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.185992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.186152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.186173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.186292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.186314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.186554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.186575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.186743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.186764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.186915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.186936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.187054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:30.735 [2024-11-20 12:43:36.187296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.187319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.187442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.187632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.187813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.187981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.188002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.188250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.188272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.188362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.188383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.188647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.188668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.188816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.188837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.189024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.189238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.189476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.189650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.189818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.189984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.190005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.190266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.190293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.190489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.190510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.190600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.190621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.190774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.735 [2024-11-20 12:43:36.190795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.735 qpair failed and we were unable to recover it.
00:29:30.735 [2024-11-20 12:43:36.190909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.190930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.191956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.191977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.192076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.192097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.736 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:30.736 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.736 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.736 [2024-11-20 12:43:36.193598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.193635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.193945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.193969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.194217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.194241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.194359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.194381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.194493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.194514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.194705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.194726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.194948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.194969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.195084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.195104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.195212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.195336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.195357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.195516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.195537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.195827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.195848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.196094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.196115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.196244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.196267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.196462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.196482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.196585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.196606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.196769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.196790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.197905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.197926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.198093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.198114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.198285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.198308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.198385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.198410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.736 [2024-11-20 12:43:36.198624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.736 [2024-11-20 12:43:36.198646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.736 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.198861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.198882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.199042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.199063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.199255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.199277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.199387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.199408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.199575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.199595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.199758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.199779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.200020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.200041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.200260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.200282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.200580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.200601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.200709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.200731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.737 [2024-11-20 12:43:36.200882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.200903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.201056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.201085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:30.737 [2024-11-20 12:43:36.201324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.201347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.201509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.737 [2024-11-20 12:43:36.201530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.201688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.201710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.737 [2024-11-20 12:43:36.201815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.201836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.201947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.201971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.202948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.202969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.203869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.203890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.204156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.204178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.204280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.204302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.737 qpair failed and we were unable to recover it.
00:29:30.737 [2024-11-20 12:43:36.204475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.737 [2024-11-20 12:43:36.204496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.204645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.204667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.204837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.204858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.204954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.204975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.205164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.205186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.205388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.205410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.205564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.205589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.205760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.205781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.205867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.205888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.206045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.206066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.206333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.206356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.206472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.206494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.206642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.206663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.206858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.207941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.207962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.208057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.208078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.208271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.208293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.208447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.208467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.208641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.208663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.738 [2024-11-20 12:43:36.208840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.208862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.209015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:30.738 [2024-11-20 12:43:36.209221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.209323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.209449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.209579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.738 [2024-11-20 12:43:36.209778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.209905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.209930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.210146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.210167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.210425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.210448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.210553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.210574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.210817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.210838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.210986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.738 [2024-11-20 12:43:36.211007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.738 qpair failed and we were unable to recover it.
00:29:30.738 [2024-11-20 12:43:36.211157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.211177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.211343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.211365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.211616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.211637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.211804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.211825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.211930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.211951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.212133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.739 [2024-11-20 12:43:36.212154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b9ba0 with addr=10.0.0.2, port=4420
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.212235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.739 [2024-11-20 12:43:36.217870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.218014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.218048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.218064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.218078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.218115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.739 12:43:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 343102
00:29:30.739 [2024-11-20 12:43:36.227768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.227842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.227866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.227877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.227887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.227911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.237785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.237846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.237863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.237871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.237878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.237894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.247766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.247827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.247842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.247849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.247855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.247869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.257766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.257824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.257840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.257848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.257854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.257869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.267756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.267813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.267832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.267839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.267845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.267861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.277716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.277774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.277789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.277796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.277802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.277817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.287804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.287859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.287875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.287882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.287888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.287902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.297876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.297939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.297957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.297964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.297970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.297984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.307872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.739 [2024-11-20 12:43:36.307922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.739 [2024-11-20 12:43:36.307937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.739 [2024-11-20 12:43:36.307944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.739 [2024-11-20 12:43:36.307950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.739 [2024-11-20 12:43:36.307965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.739 qpair failed and we were unable to recover it.
00:29:30.739 [2024-11-20 12:43:36.317910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.317961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.317976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.317983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.317989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.318004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.327917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.327972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.327987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.327994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.328000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.328015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.337945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.337997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.338012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.338019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.338028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.338043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.347966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.348022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.348037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.348045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.348051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.348065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.357916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.357970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.357986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.357993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.357999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.358014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.368029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.368086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.368102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.368109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.368116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.368130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.378054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.378111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.378126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.378133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.378139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.378153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.388094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.388141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.388156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.388163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.388169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.388183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.398101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.398158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.398173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.398180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.398186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.398200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.408146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.408199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.408219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.408226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.408232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.408247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.418176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.418239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.418254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.418261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.418267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.418282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.428232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.428285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.428303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.428311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.428317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.428331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.438168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.438221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.740 [2024-11-20 12:43:36.438235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.740 [2024-11-20 12:43:36.438242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.740 [2024-11-20 12:43:36.438247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.740 [2024-11-20 12:43:36.438262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.740 qpair failed and we were unable to recover it.
00:29:30.740 [2024-11-20 12:43:36.448256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.740 [2024-11-20 12:43:36.448316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.741 [2024-11-20 12:43:36.448335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.741 [2024-11-20 12:43:36.448342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.741 [2024-11-20 12:43:36.448348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.741 [2024-11-20 12:43:36.448363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.741 qpair failed and we were unable to recover it.
00:29:30.741 [2024-11-20 12:43:36.458282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.741 [2024-11-20 12:43:36.458340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.741 [2024-11-20 12:43:36.458357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.741 [2024-11-20 12:43:36.458364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.741 [2024-11-20 12:43:36.458370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.741 [2024-11-20 12:43:36.458384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.741 qpair failed and we were unable to recover it.
00:29:30.741 [2024-11-20 12:43:36.468309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.741 [2024-11-20 12:43:36.468362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.741 [2024-11-20 12:43:36.468378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.741 [2024-11-20 12:43:36.468385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.741 [2024-11-20 12:43:36.468394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.741 [2024-11-20 12:43:36.468409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.741 qpair failed and we were unable to recover it.
00:29:30.741 [2024-11-20 12:43:36.478330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:30.741 [2024-11-20 12:43:36.478383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:30.741 [2024-11-20 12:43:36.478398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:30.741 [2024-11-20 12:43:36.478405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:30.741 [2024-11-20 12:43:36.478410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:30.741 [2024-11-20 12:43:36.478425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:30.741 qpair failed and we were unable to recover it.
00:29:31.002 [2024-11-20 12:43:36.488421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.488484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.488500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.488507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.488513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.488528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.498390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.498447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.498461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.498468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.498474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.498489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.508421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.508512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.508528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.508535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.508541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.508555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.518459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.518518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.518533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.518540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.518546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.518560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.528481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.528536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.528550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.528557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.528564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.528578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.538506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.538558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.538572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.538579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.538585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.538599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.548527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.548578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.548593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.548600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.548607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.548622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.558595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.558650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.558669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.558676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.558682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.558696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.568596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.568653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.568668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.568676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.568682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.568697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.578629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.578693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.578709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.578717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.578723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.578737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.588646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.588703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.588723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.588730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.588737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.002 [2024-11-20 12:43:36.588750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.002 qpair failed and we were unable to recover it. 
00:29:31.002 [2024-11-20 12:43:36.598696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.002 [2024-11-20 12:43:36.598763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.002 [2024-11-20 12:43:36.598779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.002 [2024-11-20 12:43:36.598786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.002 [2024-11-20 12:43:36.598796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.598811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.608710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.608764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.608779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.608786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.608792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.608807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.618773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.618844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.618859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.618866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.618872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.618887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.628760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.628840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.628855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.628862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.628867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.628882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.638794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.638846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.638861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.638867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.638873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.638887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.648829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.648887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.648902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.648909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.648915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.648930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.658860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.658914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.658929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.658936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.658942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.658956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.668874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.668948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.668963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.668969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.668976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.668990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.678944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.679010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.679025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.679032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.679038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.679052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.688939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.688992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.689015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.689022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.689029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.689043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.698965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.699020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.699034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.699040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.699047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.699060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.708988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.709037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.709053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.709059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.709065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.709080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.718957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.719014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.719031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.719038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.719044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.719059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.003 [2024-11-20 12:43:36.729061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.003 [2024-11-20 12:43:36.729129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.003 [2024-11-20 12:43:36.729145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.003 [2024-11-20 12:43:36.729152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.003 [2024-11-20 12:43:36.729161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.003 [2024-11-20 12:43:36.729176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.003 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-11-20 12:43:36.739081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.004 [2024-11-20 12:43:36.739137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.004 [2024-11-20 12:43:36.739151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.004 [2024-11-20 12:43:36.739159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.004 [2024-11-20 12:43:36.739164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.004 [2024-11-20 12:43:36.739179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-11-20 12:43:36.749120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.004 [2024-11-20 12:43:36.749176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.004 [2024-11-20 12:43:36.749191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.004 [2024-11-20 12:43:36.749198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.004 [2024-11-20 12:43:36.749209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.004 [2024-11-20 12:43:36.749224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-11-20 12:43:36.759170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.004 [2024-11-20 12:43:36.759222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.004 [2024-11-20 12:43:36.759237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.004 [2024-11-20 12:43:36.759244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.004 [2024-11-20 12:43:36.759250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.004 [2024-11-20 12:43:36.759265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.264 [2024-11-20 12:43:36.769166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.264 [2024-11-20 12:43:36.769222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.264 [2024-11-20 12:43:36.769237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.264 [2024-11-20 12:43:36.769244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.264 [2024-11-20 12:43:36.769250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.264 [2024-11-20 12:43:36.769265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.264 qpair failed and we were unable to recover it. 
00:29:31.264 [2024-11-20 12:43:36.779194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.264 [2024-11-20 12:43:36.779257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.264 [2024-11-20 12:43:36.779272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.264 [2024-11-20 12:43:36.779280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.264 [2024-11-20 12:43:36.779286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.264 [2024-11-20 12:43:36.779300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.264 qpair failed and we were unable to recover it. 
00:29:31.264 [the identical six-record connect-retry cycle (Unknown controller ID 0x1 -> CONNECT failed rc -5, sct 1, sc 130 -> failed to connect tqpair=0x7b9ba0 -> CQ transport error -6 on qpair id 3 -> "qpair failed and we were unable to recover it.") repeats every ~10 ms from 12:43:36.789210 through 12:43:37.120312; 34 further iterations omitted]
00:29:31.528 [2024-11-20 12:43:37.130246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.130306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.130321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.130329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.130335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.528 [2024-11-20 12:43:37.130349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.528 qpair failed and we were unable to recover it. 
00:29:31.528 [2024-11-20 12:43:37.140248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.140311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.140326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.140333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.140340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.528 [2024-11-20 12:43:37.140354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.528 qpair failed and we were unable to recover it. 
00:29:31.528 [2024-11-20 12:43:37.150251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.150309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.150325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.150332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.150339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.528 [2024-11-20 12:43:37.150353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.528 qpair failed and we were unable to recover it. 
00:29:31.528 [2024-11-20 12:43:37.160280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.160332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.160347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.160354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.160361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.528 [2024-11-20 12:43:37.160375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.528 qpair failed and we were unable to recover it. 
00:29:31.528 [2024-11-20 12:43:37.170343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.170401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.170417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.170424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.170430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.528 [2024-11-20 12:43:37.170444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.528 qpair failed and we were unable to recover it. 
00:29:31.528 [2024-11-20 12:43:37.180371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.528 [2024-11-20 12:43:37.180430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.528 [2024-11-20 12:43:37.180468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.528 [2024-11-20 12:43:37.180475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.528 [2024-11-20 12:43:37.180481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.180495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.190414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.190469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.190483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.190490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.190496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.190511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.200411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.200463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.200477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.200484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.200491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.200505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.210473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.210535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.210553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.210560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.210566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.210581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.220488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.220579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.220594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.220601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.220608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.220622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.230511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.230563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.230579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.230586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.230592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.230606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.240551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.240617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.240631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.240638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.240644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.240658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.250546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.250600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.250615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.250622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.250628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.250645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.260571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.260624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.260640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.260647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.260653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.260667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.270656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.270715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.270730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.270737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.270743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.270756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.529 [2024-11-20 12:43:37.280633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.529 [2024-11-20 12:43:37.280683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.529 [2024-11-20 12:43:37.280698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.529 [2024-11-20 12:43:37.280705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.529 [2024-11-20 12:43:37.280711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.529 [2024-11-20 12:43:37.280725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.529 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.290659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.789 [2024-11-20 12:43:37.290716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.789 [2024-11-20 12:43:37.290732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.789 [2024-11-20 12:43:37.290739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.789 [2024-11-20 12:43:37.290745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.789 [2024-11-20 12:43:37.290760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.789 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.300698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.789 [2024-11-20 12:43:37.300758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.789 [2024-11-20 12:43:37.300773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.789 [2024-11-20 12:43:37.300780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.789 [2024-11-20 12:43:37.300786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.789 [2024-11-20 12:43:37.300799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.789 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.310759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.789 [2024-11-20 12:43:37.310809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.789 [2024-11-20 12:43:37.310824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.789 [2024-11-20 12:43:37.310831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.789 [2024-11-20 12:43:37.310837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.789 [2024-11-20 12:43:37.310852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.789 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.320735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.789 [2024-11-20 12:43:37.320788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.789 [2024-11-20 12:43:37.320803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.789 [2024-11-20 12:43:37.320810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.789 [2024-11-20 12:43:37.320816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.789 [2024-11-20 12:43:37.320831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.789 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.330721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.789 [2024-11-20 12:43:37.330791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.789 [2024-11-20 12:43:37.330806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.789 [2024-11-20 12:43:37.330812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.789 [2024-11-20 12:43:37.330819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.789 [2024-11-20 12:43:37.330833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.789 qpair failed and we were unable to recover it. 
00:29:31.789 [2024-11-20 12:43:37.340821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.340885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.340907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.340914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.340920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.340934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.350831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.350911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.350926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.350933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.350939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.350953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.360896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.360948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.360962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.360969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.360975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.360990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.370898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.370950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.370966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.370973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.370980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.370994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.380956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.381014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.381029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.381036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.381042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.381059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.390949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.391018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.391033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.391040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.391046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.391060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.400968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.401019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.401034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.401041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.401047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.401062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.410957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.411012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.411028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.411035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.411041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.411056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.421035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.421086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.421101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.421109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.421115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.421129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.430991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.431046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.431061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.431068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.431074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.431088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.441106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.441169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.441183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.441190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.441196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.441214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.451170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.451273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.451287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.451294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.451300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.451315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.461148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.790 [2024-11-20 12:43:37.461207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.790 [2024-11-20 12:43:37.461222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.790 [2024-11-20 12:43:37.461229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.790 [2024-11-20 12:43:37.461235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.790 [2024-11-20 12:43:37.461250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.790 qpair failed and we were unable to recover it. 
00:29:31.790 [2024-11-20 12:43:37.471174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.471244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.471262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.471269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.471275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.471290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.481234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.481284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.481300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.481307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.481312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.481327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.491220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.491277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.491293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.491300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.491306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.491321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.501263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.501319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.501334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.501341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.501348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.501363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.511219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.511274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.511290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.511297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.511303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.511321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.521341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.521400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.521415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.521421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.521428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.521442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.531381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.531433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.531448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.531456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.531462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.531477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:31.791 [2024-11-20 12:43:37.541377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.791 [2024-11-20 12:43:37.541429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.791 [2024-11-20 12:43:37.541444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.791 [2024-11-20 12:43:37.541450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.791 [2024-11-20 12:43:37.541456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:31.791 [2024-11-20 12:43:37.541471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.791 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.551414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.551465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.551480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.551487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.551494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.551508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.561418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.561469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.561485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.561492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.561498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.561513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.571477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.571534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.571548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.571555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.571561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.571575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.581499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.581549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.581563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.581570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.581576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.581591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.591521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.591571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.591586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.591593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.591600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.591614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.601566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.601619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.601638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.601645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.051 [2024-11-20 12:43:37.601651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.051 [2024-11-20 12:43:37.601665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.051 qpair failed and we were unable to recover it. 
00:29:32.051 [2024-11-20 12:43:37.611589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.051 [2024-11-20 12:43:37.611646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.051 [2024-11-20 12:43:37.611663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.051 [2024-11-20 12:43:37.611670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.611676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.611690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.621625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.621719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.621734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.621741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.621747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.621761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.631638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.631692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.631707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.631714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.631720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.631734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.641669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.641717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.641732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.641739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.641745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.641762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.651693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.651768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.651783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.651790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.651797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.651812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.661653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.661754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.661770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.661777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.661783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.661798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.671824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.671878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.671894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.671901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.671907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.671921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.681797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.681850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.681865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.681873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.681879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.681892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.691875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.691980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.691995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.692002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.692008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.692023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.701844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.052 [2024-11-20 12:43:37.701899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.052 [2024-11-20 12:43:37.701914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.052 [2024-11-20 12:43:37.701920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.052 [2024-11-20 12:43:37.701927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.052 [2024-11-20 12:43:37.701941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.052 qpair failed and we were unable to recover it. 
00:29:32.052 [2024-11-20 12:43:37.711886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.052 [2024-11-20 12:43:37.711943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.052 [2024-11-20 12:43:37.711959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.052 [2024-11-20 12:43:37.711966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.052 [2024-11-20 12:43:37.711972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.052 [2024-11-20 12:43:37.711987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.052 qpair failed and we were unable to recover it.
00:29:32.052 [2024-11-20 12:43:37.721924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.052 [2024-11-20 12:43:37.721999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.052 [2024-11-20 12:43:37.722014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.052 [2024-11-20 12:43:37.722021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.052 [2024-11-20 12:43:37.722027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.052 [2024-11-20 12:43:37.722041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.052 qpair failed and we were unable to recover it.
00:29:32.052 [2024-11-20 12:43:37.731925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.052 [2024-11-20 12:43:37.731979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.052 [2024-11-20 12:43:37.731996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.052 [2024-11-20 12:43:37.732003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.052 [2024-11-20 12:43:37.732009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.052 [2024-11-20 12:43:37.732023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.052 qpair failed and we were unable to recover it.
00:29:32.052 [2024-11-20 12:43:37.741955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.052 [2024-11-20 12:43:37.742011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.052 [2024-11-20 12:43:37.742027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.052 [2024-11-20 12:43:37.742034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.742040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.742054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.751976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.752026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.752040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.752046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.752052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.752067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.762021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.762087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.762103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.762110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.762116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.762131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.772075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.772133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.772148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.772155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.772161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.772178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.782055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.782125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.782140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.782147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.782153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.782167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.792139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.792225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.792241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.792247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.792253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.792268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.802116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.802166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.802181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.802187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.802193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.802211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.053 [2024-11-20 12:43:37.812168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.053 [2024-11-20 12:43:37.812224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.053 [2024-11-20 12:43:37.812241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.053 [2024-11-20 12:43:37.812247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.053 [2024-11-20 12:43:37.812254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.053 [2024-11-20 12:43:37.812269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.053 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.822206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.822261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.822276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.822283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.822289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.822303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.832241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.832296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.832313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.832320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.832326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.832341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.842228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.842280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.842295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.842302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.842308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.842322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.852261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.852314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.852329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.852336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.852342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.852357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.862270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.862330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.862349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.862356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.862362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.862377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.872309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.872359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.872373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.872380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.872386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.872401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.882348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.882401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.882415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.882422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.882428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.882442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.892393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.892449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.892463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.892470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.892476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.892490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.902491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.902577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.902592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.902599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.902605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.902623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.912476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.912529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.912545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.912552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.912558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.912572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.922480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.922575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.922590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.922596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.922602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.922616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.932546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.932600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.313 [2024-11-20 12:43:37.932615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.313 [2024-11-20 12:43:37.932622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.313 [2024-11-20 12:43:37.932628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.313 [2024-11-20 12:43:37.932642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.313 qpair failed and we were unable to recover it.
00:29:32.313 [2024-11-20 12:43:37.942524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.313 [2024-11-20 12:43:37.942594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.942609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.942616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.942621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.942636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:37.952545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:37.952599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.952613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.952620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.952626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.952640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:37.962585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:37.962638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.962654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.962661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.962667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.962681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:37.972649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:37.972707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.972721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.972729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.972735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.972750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:37.982635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:37.982688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.982703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.982710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.982716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.982730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:37.992669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:37.992730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:37.992751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:37.992758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:37.992764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:37.992778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.002684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.002739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.002754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.002761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.002767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.002781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.012737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.012793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.012809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.012816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.012822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.012837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.022764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.022815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.022830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.022838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.022844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.022859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.032805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.032861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.032876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.032883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.032889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.032906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.042823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.042877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.042893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.042900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.042906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.042920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.052826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.314 [2024-11-20 12:43:38.052879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.314 [2024-11-20 12:43:38.052894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.314 [2024-11-20 12:43:38.052901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.314 [2024-11-20 12:43:38.052907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:32.314 [2024-11-20 12:43:38.052921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.314 qpair failed and we were unable to recover it.
00:29:32.314 [2024-11-20 12:43:38.062825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.314 [2024-11-20 12:43:38.062879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.314 [2024-11-20 12:43:38.062895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.314 [2024-11-20 12:43:38.062901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.314 [2024-11-20 12:43:38.062908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.314 [2024-11-20 12:43:38.062922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.314 qpair failed and we were unable to recover it. 
00:29:32.314 [2024-11-20 12:43:38.072820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.314 [2024-11-20 12:43:38.072874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.314 [2024-11-20 12:43:38.072889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.315 [2024-11-20 12:43:38.072896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.315 [2024-11-20 12:43:38.072903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.315 [2024-11-20 12:43:38.072917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.315 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.082926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.082976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.082992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.082998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.083005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.083019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.092959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.093014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.093029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.093036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.093043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.093057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.102977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.103047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.103064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.103071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.103077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.103092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.113008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.113059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.113075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.113082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.113088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.113103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.123124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.123175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.123192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.123200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.123209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.123224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.133068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.133119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.133135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.133142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.133148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.133162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.143096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.143164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.143180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.143187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.143193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.143212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.153157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.153206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.153221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.153228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.153234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.153248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.163140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.163194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.163212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.163219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.163225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.163243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.173182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.173255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.173271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.173278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.173284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.173299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.183200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.183262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.183276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.183283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.183290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.183304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.193219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.193277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.193293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.193301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.575 [2024-11-20 12:43:38.193307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.575 [2024-11-20 12:43:38.193322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.575 qpair failed and we were unable to recover it. 
00:29:32.575 [2024-11-20 12:43:38.203191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.575 [2024-11-20 12:43:38.203251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.575 [2024-11-20 12:43:38.203265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.575 [2024-11-20 12:43:38.203272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.203278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.203292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.213302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.213382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.213398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.213405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.213410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.213425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.223268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.223327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.223342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.223349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.223354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.223369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.233344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.233442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.233457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.233463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.233469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.233484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.243382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.243469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.243484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.243491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.243497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.243511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.253330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.253385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.253403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.253410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.253417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.253430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.263438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.263498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.263513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.263520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.263526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.263541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.273449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.273507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.273522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.273529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.273535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.273549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.283459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.283512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.283527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.283534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.283540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.283554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.293530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.293586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.293601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.293608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.293614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.293633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.303474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.303528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.303543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.303550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.303555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.303569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.313499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.313559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.313575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.313582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.313588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.313602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.323633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.323684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.323698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.323705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.576 [2024-11-20 12:43:38.323711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.576 [2024-11-20 12:43:38.323725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.576 qpair failed and we were unable to recover it. 
00:29:32.576 [2024-11-20 12:43:38.333671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.576 [2024-11-20 12:43:38.333748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.576 [2024-11-20 12:43:38.333762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.576 [2024-11-20 12:43:38.333769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.577 [2024-11-20 12:43:38.333775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.577 [2024-11-20 12:43:38.333789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.577 qpair failed and we were unable to recover it. 
00:29:32.836 [2024-11-20 12:43:38.343661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.836 [2024-11-20 12:43:38.343718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.836 [2024-11-20 12:43:38.343733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.836 [2024-11-20 12:43:38.343740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.836 [2024-11-20 12:43:38.343746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.836 [2024-11-20 12:43:38.343760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.836 qpair failed and we were unable to recover it. 
00:29:32.836 [2024-11-20 12:43:38.353702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.836 [2024-11-20 12:43:38.353763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.836 [2024-11-20 12:43:38.353777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.836 [2024-11-20 12:43:38.353784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.836 [2024-11-20 12:43:38.353789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.836 [2024-11-20 12:43:38.353803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.836 qpair failed and we were unable to recover it. 
00:29:32.836 [2024-11-20 12:43:38.363698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.836 [2024-11-20 12:43:38.363792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.836 [2024-11-20 12:43:38.363807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.836 [2024-11-20 12:43:38.363814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.836 [2024-11-20 12:43:38.363820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.836 [2024-11-20 12:43:38.363834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.836 qpair failed and we were unable to recover it. 
00:29:32.836 [2024-11-20 12:43:38.373777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.836 [2024-11-20 12:43:38.373862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.836 [2024-11-20 12:43:38.373877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.836 [2024-11-20 12:43:38.373884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.836 [2024-11-20 12:43:38.373890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.836 [2024-11-20 12:43:38.373904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.836 qpair failed and we were unable to recover it. 
00:29:32.836 [2024-11-20 12:43:38.383734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.836 [2024-11-20 12:43:38.383819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.836 [2024-11-20 12:43:38.383837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.836 [2024-11-20 12:43:38.383844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.836 [2024-11-20 12:43:38.383850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.383865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.393742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.393797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.393812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.393819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.393825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.393840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.403753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.403851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.403867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.403874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.403880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.403894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.413803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.413856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.413871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.413878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.413884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.413898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.423875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.423929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.423944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.423951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.423957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.423975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.433866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.433920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.433935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.433942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.433948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.433963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.443863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.443916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.443931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.443938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.443944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.443958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.454008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.454065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.454080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.454087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.454093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.454107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.463935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.463987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.464002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.464009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.464016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.464030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.474024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.474082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.474096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.474103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.474108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.474123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.484060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.484112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.484126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.484133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.484139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.484154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.494011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.494081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.494096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.494103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.494109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.494124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.504094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.504161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.504176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.504184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.504189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.504208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.837 qpair failed and we were unable to recover it. 
00:29:32.837 [2024-11-20 12:43:38.514175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.837 [2024-11-20 12:43:38.514265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.837 [2024-11-20 12:43:38.514281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.837 [2024-11-20 12:43:38.514291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.837 [2024-11-20 12:43:38.514297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.837 [2024-11-20 12:43:38.514312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.524140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.524227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.524242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.524249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.524255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.524269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.534126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.534184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.534198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.534209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.534215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.534231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.544268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.544343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.544357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.544364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.544370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.544385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.554282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.554337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.554351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.554359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.554365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.554384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.564318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.564407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.564423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.564430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.564436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.564451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.574339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.574415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.574430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.574437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.574443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.574458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.584393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.584452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.584466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.584474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.584480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.584495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:32.838 [2024-11-20 12:43:38.594404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.838 [2024-11-20 12:43:38.594458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.838 [2024-11-20 12:43:38.594473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.838 [2024-11-20 12:43:38.594480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.838 [2024-11-20 12:43:38.594486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:32.838 [2024-11-20 12:43:38.594501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.838 qpair failed and we were unable to recover it. 
00:29:33.098 [2024-11-20 12:43:38.604425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.098 [2024-11-20 12:43:38.604478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.098 [2024-11-20 12:43:38.604493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.098 [2024-11-20 12:43:38.604499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.099 [2024-11-20 12:43:38.604505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.099 [2024-11-20 12:43:38.604519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.099 qpair failed and we were unable to recover it. 
00:29:33.099 [2024-11-20 12:43:38.614434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.099 [2024-11-20 12:43:38.614487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.099 [2024-11-20 12:43:38.614503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.099 [2024-11-20 12:43:38.614511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.099 [2024-11-20 12:43:38.614517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.099 [2024-11-20 12:43:38.614533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.099 qpair failed and we were unable to recover it. 
00:29:33.099 [2024-11-20 12:43:38.624475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.099 [2024-11-20 12:43:38.624540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.099 [2024-11-20 12:43:38.624554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.099 [2024-11-20 12:43:38.624562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.099 [2024-11-20 12:43:38.624568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.099 [2024-11-20 12:43:38.624582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.099 qpair failed and we were unable to recover it. 
00:29:33.099 [2024-11-20 12:43:38.634476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.099 [2024-11-20 12:43:38.634553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.099 [2024-11-20 12:43:38.634568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.099 [2024-11-20 12:43:38.634574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.099 [2024-11-20 12:43:38.634581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.099 [2024-11-20 12:43:38.634596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.099 qpair failed and we were unable to recover it. 
00:29:33.099 [2024-11-20 12:43:38.644516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.644566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.644583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.644596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.644602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.644617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.654528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.654581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.654595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.654602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.654609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.654624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.664560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.664617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.664632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.664639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.664646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.664661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.674582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.674639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.674653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.674661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.674667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.674681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.684603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.684660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.684675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.684682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.684688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.684706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.694657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.694714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.694729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.694736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.694742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.694757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.704710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.704765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.704779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.704786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.704793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.704807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.714716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.714774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.714789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.714797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.714803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.714818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.724724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.099 [2024-11-20 12:43:38.724779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.099 [2024-11-20 12:43:38.724792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.099 [2024-11-20 12:43:38.724800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.099 [2024-11-20 12:43:38.724806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.099 [2024-11-20 12:43:38.724821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.099 qpair failed and we were unable to recover it.
00:29:33.099 [2024-11-20 12:43:38.734763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.734844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.734859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.734867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.734873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.734887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.744788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.744845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.744859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.744867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.744874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.744888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.754853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.754917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.754931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.754938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.754944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.754958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.764835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.764904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.764919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.764927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.764932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.764946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.774880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.774936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.774952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.774963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.774970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.774984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.784918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.784970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.784985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.784992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.784998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.785013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.794927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.795021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.795037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.795044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.795050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.795065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.804960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.805015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.805028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.805036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.805043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.805058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.814988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.815044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.815062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.815069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.815076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.815095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.825088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.825195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.825216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.825223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.825230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.825246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.835041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.835098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.835112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.835120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.835126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.835141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.845006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.845062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.845077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.845085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.845091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.845106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.100 [2024-11-20 12:43:38.855134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.100 [2024-11-20 12:43:38.855211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.100 [2024-11-20 12:43:38.855227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.100 [2024-11-20 12:43:38.855235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.100 [2024-11-20 12:43:38.855241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.100 [2024-11-20 12:43:38.855256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.100 qpair failed and we were unable to recover it.
00:29:33.361 [2024-11-20 12:43:38.865180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.361 [2024-11-20 12:43:38.865290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.361 [2024-11-20 12:43:38.865306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.361 [2024-11-20 12:43:38.865313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.361 [2024-11-20 12:43:38.865319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.361 [2024-11-20 12:43:38.865334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.361 qpair failed and we were unable to recover it.
00:29:33.361 [2024-11-20 12:43:38.875151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.361 [2024-11-20 12:43:38.875211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.361 [2024-11-20 12:43:38.875225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.361 [2024-11-20 12:43:38.875233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.361 [2024-11-20 12:43:38.875239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.361 [2024-11-20 12:43:38.875253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.361 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.885204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.885283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.885298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.885305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.885312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.885326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.895232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.895292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.895306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.895313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.895319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.895334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.905240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.905294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.905309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.905319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.905326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.905341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.915268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.915319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.915334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.915342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.915348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.915364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.925327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.925398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.925413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.925421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.925427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.925442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.935362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.935422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.935436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.935444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.935450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.935465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.945370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.945468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.945483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.945490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.945496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.945515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.955386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.955444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.955459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.955466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.955473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.955487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.965443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.362 [2024-11-20 12:43:38.965497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.362 [2024-11-20 12:43:38.965512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.362 [2024-11-20 12:43:38.965519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.362 [2024-11-20 12:43:38.965526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.362 [2024-11-20 12:43:38.965540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.362 qpair failed and we were unable to recover it.
00:29:33.362 [2024-11-20 12:43:38.975451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.363 [2024-11-20 12:43:38.975510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.363 [2024-11-20 12:43:38.975525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.363 [2024-11-20 12:43:38.975533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.363 [2024-11-20 12:43:38.975539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.363 [2024-11-20 12:43:38.975553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.363 qpair failed and we were unable to recover it.
00:29:33.363 [2024-11-20 12:43:38.985466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.363 [2024-11-20 12:43:38.985515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.363 [2024-11-20 12:43:38.985529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.363 [2024-11-20 12:43:38.985536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.363 [2024-11-20 12:43:38.985542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:33.363 [2024-11-20 12:43:38.985557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.363 qpair failed and we were unable to recover it.
00:29:33.363 [2024-11-20 12:43:38.995537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:38.995608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:38.995623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:38.995630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:38.995637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:38.995651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.005556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.005620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.005634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.005642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.005648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.005663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.015562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.015619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.015634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.015641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.015648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.015662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.025714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.025774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.025788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.025795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.025802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.025816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.035693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.035748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.035762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.035774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.035780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.035795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.045666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.045720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.045735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.045741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.045748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.045762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.055707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.363 [2024-11-20 12:43:39.055768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.363 [2024-11-20 12:43:39.055785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.363 [2024-11-20 12:43:39.055793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.363 [2024-11-20 12:43:39.055799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.363 [2024-11-20 12:43:39.055814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.363 qpair failed and we were unable to recover it. 
00:29:33.363 [2024-11-20 12:43:39.065628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.065695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.065710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.065718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.065724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.065738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.364 [2024-11-20 12:43:39.075750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.075818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.075832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.075840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.075846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.075863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.364 [2024-11-20 12:43:39.085840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.085925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.085939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.085946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.085952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.085967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.364 [2024-11-20 12:43:39.095818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.095882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.095897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.095905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.095911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.095926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.364 [2024-11-20 12:43:39.105821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.105877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.105892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.105900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.105907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.105922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.364 [2024-11-20 12:43:39.115764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.364 [2024-11-20 12:43:39.115861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.364 [2024-11-20 12:43:39.115877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.364 [2024-11-20 12:43:39.115884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.364 [2024-11-20 12:43:39.115890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.364 [2024-11-20 12:43:39.115905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.364 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.125871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.125944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.125959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.125966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.125973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.125987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.135958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.136012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.136026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.136033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.136040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.136056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.145933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.145985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.146000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.146007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.146013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.146028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.155996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.156058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.156073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.156081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.156087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.156102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.165973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.166025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.166039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.166049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.166056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.166070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.176012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.176068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.176082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.176090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.176096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.176110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.186010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.186109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.186124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.186132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.186139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.186153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.196066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.196183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.196199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.196210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.196216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.196233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.206147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.206213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.206228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.206236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.206242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.206260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.216127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.216183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.216198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.216211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.216218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.216233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.226161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.226245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.226259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.226267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.226273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.226288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.236177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.236264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.236278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.236285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.236291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.236306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.246229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.246313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.246327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.246335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.246341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.246355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.256235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.256293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-20 12:43:39.256308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-20 12:43:39.256316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-20 12:43:39.256322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.623 [2024-11-20 12:43:39.256337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-20 12:43:39.266261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-20 12:43:39.266317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.266331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.266338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.266345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.266358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.276234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.276287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.276301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.276308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.276314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.276329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.286315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.286368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.286382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.286389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.286395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.286410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.296367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.296436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.296450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.296461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.296467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.296482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.306395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.306451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.306466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.306474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.306481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.306495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.316402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.316461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.316475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.316484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.316490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.316504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.326441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.326492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.326508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.326515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.326522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.326536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.336469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.336558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.336573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.336580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.336586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.336604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.346492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.346548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.346562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.346569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.346575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.346590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.356446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.356502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.356517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.356524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.356531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.356545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.366554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.366611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.366628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.366636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.366642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.366657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-20 12:43:39.376573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-20 12:43:39.376629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-20 12:43:39.376643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-20 12:43:39.376650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-20 12:43:39.376657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.624 [2024-11-20 12:43:39.376671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.386617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.386678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.386692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.386699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.386706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.386720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.396637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.396720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.396734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.396741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.396748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.396762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.406694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.406755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.406770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.406777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.406783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.406798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.416732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.416792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.416805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.416813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.416820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.416834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.426731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.426819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.426834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.426845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.426851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.426865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.436747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.436822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.436836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.436842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.436849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.436863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.446785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.446842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.446856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.446864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.446870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.446885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.456736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.456790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.456805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.456813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.456820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.456835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.885 qpair failed and we were unable to recover it. 
00:29:33.885 [2024-11-20 12:43:39.466836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.885 [2024-11-20 12:43:39.466935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.885 [2024-11-20 12:43:39.466949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.885 [2024-11-20 12:43:39.466956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.885 [2024-11-20 12:43:39.466963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.885 [2024-11-20 12:43:39.466981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.476859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.476917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.476931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.476938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.476945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.476959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.486934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.487001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.487016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.487024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.487030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.487044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.496917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.496994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.497009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.497017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.497023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.497038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.506949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.507024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.507040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.507047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.507053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.507068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.517011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.517075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.517089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.517097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.517104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.517118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.527052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.527126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.527141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.527148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.527154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.527169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.537038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.537092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.537106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.537114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.537121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.537135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.547041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.547098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.547112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.547120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.547126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.547141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.557097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.557154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.557169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.557184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.557190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.557209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.567147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-20 12:43:39.567211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-20 12:43:39.567225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.886 [2024-11-20 12:43:39.567233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.886 [2024-11-20 12:43:39.567239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.886 [2024-11-20 12:43:39.567254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.886 qpair failed and we were unable to recover it. 
00:29:33.886 [2024-11-20 12:43:39.577160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.577219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.577235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.577243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.577249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.577264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.587195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.587253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.587268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.587275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.587282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.587297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.597218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.597275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.597290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.597297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.597303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.597322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.607274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.607328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.607343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.607350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.607357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.607372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.617226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.617288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.617303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.617313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.617320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.617335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.627324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.627384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.627399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.627406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.627413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.627428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-20 12:43:39.637365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-20 12:43:39.637458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-20 12:43:39.637474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-20 12:43:39.637481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-20 12:43:39.637488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:33.887 [2024-11-20 12:43:39.637501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.647357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.647423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.647437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.647445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.647451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.647465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.657401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.657460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.657475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.657482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.657488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.657503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.667473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.667533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.667548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.667556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.667562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.667576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.677449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.677540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.677555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.677561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.677567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.677582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.687453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.687506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.687520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.687531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.687537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.687551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.697434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.697491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.697506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.697513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.697520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.697534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.707520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.707593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-11-20 12:43:39.707608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-11-20 12:43:39.707615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-11-20 12:43:39.707621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.148 [2024-11-20 12:43:39.707636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-11-20 12:43:39.717602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-11-20 12:43:39.717662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.717676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.717684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.717690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.717705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.727609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.727714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.727731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.727738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.727744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.727763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.737541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.737597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.737611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.737618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.737625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.737640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.747563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.747624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.747638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.747646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.747652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.747666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.757637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.757695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.757710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.757718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.757725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.757738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.767665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.767714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.767728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.767735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.767741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.767757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.777720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.777780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.777794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.777801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.777807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.777821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.787771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.787830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.787844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.787852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.787858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.787873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.797727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.797820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.797835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.797842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.797848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.797864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.807738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.807828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.807842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.807850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.807856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.149 [2024-11-20 12:43:39.807871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-11-20 12:43:39.817849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-11-20 12:43:39.817928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-11-20 12:43:39.817944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-11-20 12:43:39.817955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-11-20 12:43:39.817962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.817978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.827864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.827951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.827970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.827979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.827986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.828002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.837801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.837871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.837886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.837894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.837901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.837915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.847847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.847933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.847948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.847955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.847962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.847976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.857895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.857990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.858005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.858012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.858018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.858037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.867961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.868019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.868034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.868042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.868047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.868062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.878036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.150 [2024-11-20 12:43:39.878090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.150 [2024-11-20 12:43:39.878104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.150 [2024-11-20 12:43:39.878111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.150 [2024-11-20 12:43:39.878118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.150 [2024-11-20 12:43:39.878132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.150 qpair failed and we were unable to recover it. 
00:29:34.150 [2024-11-20 12:43:39.888013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.150 [2024-11-20 12:43:39.888068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.150 [2024-11-20 12:43:39.888082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.150 [2024-11-20 12:43:39.888089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.150 [2024-11-20 12:43:39.888096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.150 [2024-11-20 12:43:39.888111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.150 qpair failed and we were unable to recover it.
00:29:34.150 [2024-11-20 12:43:39.898112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.150 [2024-11-20 12:43:39.898167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.150 [2024-11-20 12:43:39.898181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.150 [2024-11-20 12:43:39.898188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.150 [2024-11-20 12:43:39.898195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.150 [2024-11-20 12:43:39.898215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.150 qpair failed and we were unable to recover it.
00:29:34.150 [2024-11-20 12:43:39.908037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.150 [2024-11-20 12:43:39.908099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.150 [2024-11-20 12:43:39.908116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.150 [2024-11-20 12:43:39.908124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.150 [2024-11-20 12:43:39.908131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.150 [2024-11-20 12:43:39.908146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.150 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.918061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.918159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.918174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.918182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.918188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.918208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.928119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.928167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.928182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.928189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.928195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.928214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.938145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.938235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.938250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.938257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.938263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.938278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.948243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.948312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.948327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.948337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.948343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.948358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.958215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.958271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.958288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.958296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.958303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.958318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.968188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.968253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.968268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.968276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.968282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.968296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.978322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.411 [2024-11-20 12:43:39.978408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.411 [2024-11-20 12:43:39.978423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.411 [2024-11-20 12:43:39.978430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.411 [2024-11-20 12:43:39.978436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.411 [2024-11-20 12:43:39.978450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.411 qpair failed and we were unable to recover it.
00:29:34.411 [2024-11-20 12:43:39.988250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:39.988302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:39.988315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:39.988323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:39.988328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:39.988347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:39.998376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:39.998464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:39.998478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:39.998485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:39.998491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:39.998506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.008384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.008450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.008466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.008474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.008481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.008497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.018524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.018625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.018641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.018650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.018658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.018675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.028499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.028571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.028586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.028594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.028601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.028616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.038482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.038543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.038557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.038564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.038572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.038587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.048535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.048590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.048605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.048612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.048619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.048634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.058547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.058609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.058625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.058633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.058639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.058655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.068515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.068569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.068585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.068593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.068599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.068614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.078596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.078653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.078669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.078681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.078687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.078703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.088659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.088719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.412 [2024-11-20 12:43:40.088736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.412 [2024-11-20 12:43:40.088744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.412 [2024-11-20 12:43:40.088751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.412 [2024-11-20 12:43:40.088767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.412 qpair failed and we were unable to recover it.
00:29:34.412 [2024-11-20 12:43:40.098680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.412 [2024-11-20 12:43:40.098787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.098808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.098817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.098824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.098842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.108655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.108743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.108761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.108768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.108775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.108790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.118631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.118687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.118704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.118714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.118721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.118740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.128730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.128785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.128801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.128809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.128816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.128831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.138723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.138780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.138796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.138804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.138812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.138826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.148792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.148849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.148866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.148874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.148881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.148896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.158852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.158904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.158919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.158926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.158933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.158947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.413 [2024-11-20 12:43:40.168830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.413 [2024-11-20 12:43:40.168882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.413 [2024-11-20 12:43:40.168896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.413 [2024-11-20 12:43:40.168903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.413 [2024-11-20 12:43:40.168910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.413 [2024-11-20 12:43:40.168925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.413 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.178828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.178884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.178898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.178905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.178912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.178927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.188842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.188904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.188917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.188925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.188931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.188946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.198936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.199000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.199014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.199021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.199028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.199042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.209008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.209072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.209089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.209100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.209106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.209121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.218993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.219054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.219068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.219076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.219082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.219097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.229022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.674 [2024-11-20 12:43:40.229076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.674 [2024-11-20 12:43:40.229091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.674 [2024-11-20 12:43:40.229098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.674 [2024-11-20 12:43:40.229105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:34.674 [2024-11-20 12:43:40.229119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.674 qpair failed and we were unable to recover it.
00:29:34.674 [2024-11-20 12:43:40.239052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.239149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.239163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.239171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.239177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.239192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.249076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.249126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.249140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.249147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.249153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.249170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.259088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.259172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.259188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.259195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.259206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.259222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.269147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.269207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.269221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.269229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.269235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.269249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.279170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.279228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.279243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.279250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.279256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.279271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.289128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.674 [2024-11-20 12:43:40.289180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.674 [2024-11-20 12:43:40.289194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.674 [2024-11-20 12:43:40.289210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.674 [2024-11-20 12:43:40.289217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.674 [2024-11-20 12:43:40.289232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.674 qpair failed and we were unable to recover it. 
00:29:34.674 [2024-11-20 12:43:40.299243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.299299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.299315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.299323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.299330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.299344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.309253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.309311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.309325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.309333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.309339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.309353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.319286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.319352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.319366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.319374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.319380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.319395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.329307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.329395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.329408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.329415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.329421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.329435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.339284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.339378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.339391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.339402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.339408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.339422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.349368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.349430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.349444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.349451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.349457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.349472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.359403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.359474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.359489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.359496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.359502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.359517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.369461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.369524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.369538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.369545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.369551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.369565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.379461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.379519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.379533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.379541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.379547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.379565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.389494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.389557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.389572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.389579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.389586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.389601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.399516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.399587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.399601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.399609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.399615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.399630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.675 qpair failed and we were unable to recover it. 
00:29:34.675 [2024-11-20 12:43:40.409547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.675 [2024-11-20 12:43:40.409596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.675 [2024-11-20 12:43:40.409611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.675 [2024-11-20 12:43:40.409618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.675 [2024-11-20 12:43:40.409624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.675 [2024-11-20 12:43:40.409640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.676 qpair failed and we were unable to recover it. 
00:29:34.676 [2024-11-20 12:43:40.419623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.676 [2024-11-20 12:43:40.419729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.676 [2024-11-20 12:43:40.419743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.676 [2024-11-20 12:43:40.419750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.676 [2024-11-20 12:43:40.419757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.676 [2024-11-20 12:43:40.419772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.676 qpair failed and we were unable to recover it. 
00:29:34.676 [2024-11-20 12:43:40.429527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.676 [2024-11-20 12:43:40.429632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.676 [2024-11-20 12:43:40.429646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.676 [2024-11-20 12:43:40.429653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.676 [2024-11-20 12:43:40.429660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.676 [2024-11-20 12:43:40.429674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.676 qpair failed and we were unable to recover it. 
00:29:34.937 [2024-11-20 12:43:40.439626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.937 [2024-11-20 12:43:40.439677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.937 [2024-11-20 12:43:40.439694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.937 [2024-11-20 12:43:40.439703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.937 [2024-11-20 12:43:40.439710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.937 [2024-11-20 12:43:40.439725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.937 qpair failed and we were unable to recover it. 
00:29:34.937 [2024-11-20 12:43:40.449655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.937 [2024-11-20 12:43:40.449710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.937 [2024-11-20 12:43:40.449724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.937 [2024-11-20 12:43:40.449732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.937 [2024-11-20 12:43:40.449738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.937 [2024-11-20 12:43:40.449753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.937 qpair failed and we were unable to recover it. 
00:29:34.937 [2024-11-20 12:43:40.459682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.937 [2024-11-20 12:43:40.459737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.937 [2024-11-20 12:43:40.459752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.937 [2024-11-20 12:43:40.459759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.937 [2024-11-20 12:43:40.459766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.937 [2024-11-20 12:43:40.459781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.937 qpair failed and we were unable to recover it. 
00:29:34.937 [2024-11-20 12:43:40.469704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.937 [2024-11-20 12:43:40.469757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.469771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.469781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.469787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.469802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.479778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.479830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.479844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.479851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.479857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.479871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.489780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.489839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.489854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.489861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.489867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.489881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.499854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.499913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.499927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.499934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.499940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.499954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.509828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.509885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.509900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.509908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.509914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.509935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.519853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.519941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.519956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.519963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.519969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.519984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.529885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.529938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.529951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.529959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.529965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.529979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.539954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.540011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.540025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.540032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.540038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.540053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.549963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.550065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.550079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.550087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.550093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.550107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.559967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.560028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.560043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.560050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.560056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.560071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.938 [2024-11-20 12:43:40.570016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.938 [2024-11-20 12:43:40.570080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.938 [2024-11-20 12:43:40.570094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.938 [2024-11-20 12:43:40.570102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.938 [2024-11-20 12:43:40.570108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.938 [2024-11-20 12:43:40.570123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.938 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.580100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.580179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.580193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.580205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.580212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.580227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.590121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.590175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.590190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.590197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.590208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.590223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.600082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.600132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.600146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.600157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.600163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.600177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.610119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.610172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.610189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.610197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.610207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.610224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.620076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.620143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.620157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.620165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.620171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.620186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.630168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.630232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.630246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.630254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.630260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.630275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.640228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.640282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.640296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.640304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.640310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.640329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.650227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.650279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.650294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.650301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.650308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.650322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.660276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.660350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.660366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.660373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.660379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.660395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.670346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.670402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.670416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.670423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.670430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.670445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.680308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.680362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.939 [2024-11-20 12:43:40.680378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.939 [2024-11-20 12:43:40.680385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.939 [2024-11-20 12:43:40.680393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.939 [2024-11-20 12:43:40.680407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.939 qpair failed and we were unable to recover it. 
00:29:34.939 [2024-11-20 12:43:40.690381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.939 [2024-11-20 12:43:40.690436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.940 [2024-11-20 12:43:40.690450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.940 [2024-11-20 12:43:40.690457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.940 [2024-11-20 12:43:40.690464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:34.940 [2024-11-20 12:43:40.690478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.940 qpair failed and we were unable to recover it. 
00:29:35.200 [2024-11-20 12:43:40.700385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.200 [2024-11-20 12:43:40.700441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.200 [2024-11-20 12:43:40.700455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.200 [2024-11-20 12:43:40.700463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.200 [2024-11-20 12:43:40.700470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.200 [2024-11-20 12:43:40.700484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.200 qpair failed and we were unable to recover it. 
00:29:35.200 [2024-11-20 12:43:40.710417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.200 [2024-11-20 12:43:40.710477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.200 [2024-11-20 12:43:40.710492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.200 [2024-11-20 12:43:40.710499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.200 [2024-11-20 12:43:40.710505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.200 [2024-11-20 12:43:40.710520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.200 qpair failed and we were unable to recover it. 
00:29:35.200 [2024-11-20 12:43:40.720432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.200 [2024-11-20 12:43:40.720480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.200 [2024-11-20 12:43:40.720494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.200 [2024-11-20 12:43:40.720501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.200 [2024-11-20 12:43:40.720508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.200 [2024-11-20 12:43:40.720522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.200 qpair failed and we were unable to recover it. 
00:29:35.200 [2024-11-20 12:43:40.730459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.200 [2024-11-20 12:43:40.730511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.200 [2024-11-20 12:43:40.730525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.200 [2024-11-20 12:43:40.730536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.200 [2024-11-20 12:43:40.730542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.200 [2024-11-20 12:43:40.730557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.200 qpair failed and we were unable to recover it. 
00:29:35.200 [2024-11-20 12:43:40.740531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.740587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.740601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.740608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.740615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.740629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.750581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.750673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.750689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.750696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.750702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.750717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.760584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.760640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.760655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.760663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.760669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.760685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.770580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.770636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.770651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.770658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.770664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.770683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.780613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.780673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.780688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.780695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.780701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.780716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.790653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.790706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.790720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.790727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.790734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.790748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.800681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.800737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.800751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.800758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.800764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.800779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.810697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.201 [2024-11-20 12:43:40.810768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.201 [2024-11-20 12:43:40.810783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.201 [2024-11-20 12:43:40.810790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.201 [2024-11-20 12:43:40.810796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.201 [2024-11-20 12:43:40.810810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.201 qpair failed and we were unable to recover it. 
00:29:35.201 [2024-11-20 12:43:40.820730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.201 [2024-11-20 12:43:40.820789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.201 [2024-11-20 12:43:40.820805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.201 [2024-11-20 12:43:40.820812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.201 [2024-11-20 12:43:40.820820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.201 [2024-11-20 12:43:40.820835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.201 qpair failed and we were unable to recover it.
00:29:35.201 [2024-11-20 12:43:40.830761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.201 [2024-11-20 12:43:40.830820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.201 [2024-11-20 12:43:40.830836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.201 [2024-11-20 12:43:40.830844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.201 [2024-11-20 12:43:40.830850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.201 [2024-11-20 12:43:40.830866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.201 qpair failed and we were unable to recover it.
00:29:35.201 [2024-11-20 12:43:40.840720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.201 [2024-11-20 12:43:40.840776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.201 [2024-11-20 12:43:40.840791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.201 [2024-11-20 12:43:40.840797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.201 [2024-11-20 12:43:40.840804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.201 [2024-11-20 12:43:40.840818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.201 qpair failed and we were unable to recover it.
00:29:35.201 [2024-11-20 12:43:40.850882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.201 [2024-11-20 12:43:40.850969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.201 [2024-11-20 12:43:40.850986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.201 [2024-11-20 12:43:40.850994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.201 [2024-11-20 12:43:40.851001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.201 [2024-11-20 12:43:40.851016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.201 qpair failed and we were unable to recover it.
00:29:35.201 [2024-11-20 12:43:40.860885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.201 [2024-11-20 12:43:40.860942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.201 [2024-11-20 12:43:40.860957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.201 [2024-11-20 12:43:40.860968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.201 [2024-11-20 12:43:40.860975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.201 [2024-11-20 12:43:40.860990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.870920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.870987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.871002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.871010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.871016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.871031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.880943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.880993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.881008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.881016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.881022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.881037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.890934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.890984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.890999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.891007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.891014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.891029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.901022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.901083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.901098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.901105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.901112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.901130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.911071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.911154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.911170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.911178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.911184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.911199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.921072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.921128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.921143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.921151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.921157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.921171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.931083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.931148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.931163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.931170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.931176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.931191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.941086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.941141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.941155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.941161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.941168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.941183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.951112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.951194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.951212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.951219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.951226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.951240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.202 [2024-11-20 12:43:40.961134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.202 [2024-11-20 12:43:40.961188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.202 [2024-11-20 12:43:40.961207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.202 [2024-11-20 12:43:40.961215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.202 [2024-11-20 12:43:40.961222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.202 [2024-11-20 12:43:40.961237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.202 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:40.971168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:40.971240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:40.971255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:40.971263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:40.971269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:40.971284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:40.981224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:40.981293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:40.981308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:40.981315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:40.981321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:40.981336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:40.991271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:40.991341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:40.991355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:40.991366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:40.991372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:40.991387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:41.001291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:41.001345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:41.001361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:41.001368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:41.001375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:41.001390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:41.011328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:41.011390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:41.011405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:41.011413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:41.011419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:41.011434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:41.021348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:41.021404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:41.021418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:41.021425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:41.021431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.463 [2024-11-20 12:43:41.021446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.463 qpair failed and we were unable to recover it.
00:29:35.463 [2024-11-20 12:43:41.031369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.463 [2024-11-20 12:43:41.031422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.463 [2024-11-20 12:43:41.031437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.463 [2024-11-20 12:43:41.031444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.463 [2024-11-20 12:43:41.031451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.031470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.041399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.041468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.041482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.041489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.041496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.041510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.051401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.051457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.051471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.051480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.051486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.051501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.061409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.061475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.061491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.061499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.061505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.061520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.071387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.071446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.071461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.071468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.071475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.071489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.081461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.081517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.081531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.081538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.081545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.081560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.091549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.091600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.091614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.091621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.091628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.091643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.101549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.101605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.101619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.101626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.101632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.101646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.111556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.111611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.111626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.111633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.111640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.111654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.121513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.121568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.121583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.121595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.121601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.121617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.131622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.131677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.131692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.131699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.131705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.131720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.141603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.141678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.141694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.141702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.141708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.141724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.151672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.151729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.151744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.151751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.151757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.464 [2024-11-20 12:43:41.151773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.464 qpair failed and we were unable to recover it.
00:29:35.464 [2024-11-20 12:43:41.161700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.464 [2024-11-20 12:43:41.161756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.464 [2024-11-20 12:43:41.161771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.464 [2024-11-20 12:43:41.161778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.464 [2024-11-20 12:43:41.161785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.465 [2024-11-20 12:43:41.161804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.465 qpair failed and we were unable to recover it.
00:29:35.465 [2024-11-20 12:43:41.171756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.171843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.171859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.171867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.171873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.171889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.465 [2024-11-20 12:43:41.181699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.181756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.181770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.181778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.181784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.181799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.465 [2024-11-20 12:43:41.191727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.191816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.191830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.191838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.191844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.191858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.465 [2024-11-20 12:43:41.201809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.201864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.201878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.201886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.201891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.201905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.465 [2024-11-20 12:43:41.211824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.211894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.211909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.211916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.211922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.211937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.465 [2024-11-20 12:43:41.221932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.465 [2024-11-20 12:43:41.222034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.465 [2024-11-20 12:43:41.222049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.465 [2024-11-20 12:43:41.222056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.465 [2024-11-20 12:43:41.222063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.465 [2024-11-20 12:43:41.222077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.465 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.231918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.232003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.232017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.232025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.232031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.232046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.241868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.241947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.241961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.241969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.241974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.241989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.251955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.252005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.252019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.252030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.252036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.252050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.262005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.262069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.262084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.262092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.262098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.262113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.272001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.272059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.272074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.272081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.272087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.272102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.282032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.282086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.282101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.282108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.282114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.282128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.292076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.292129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.292144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.292150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.292157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.292176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.302099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.302157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.302171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.302178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.302184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.302199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.312047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.726 [2024-11-20 12:43:41.312107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.726 [2024-11-20 12:43:41.312122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.726 [2024-11-20 12:43:41.312130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.726 [2024-11-20 12:43:41.312136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.726 [2024-11-20 12:43:41.312151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.726 qpair failed and we were unable to recover it. 
00:29:35.726 [2024-11-20 12:43:41.322145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.322236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.322251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.322258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.322265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.322280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.332177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.332235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.332250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.332257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.332264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.332278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.342237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.342310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.342325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.342333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.342340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.342354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.352241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.352310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.352324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.352332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.352339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.352353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.362264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.362361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.362378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.362385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.362391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.362407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.372274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.372332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.372346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.372354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.372360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.372375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.382262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.382318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.382332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.382342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.382349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.382364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.392387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.392444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.392459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.392466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.392472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.392487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.402369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.402453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.402468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.402475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.402481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.402496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.412385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.412442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.412457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.412465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.412471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.412486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.422393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.422452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.422466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.422473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.422480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.422497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.432457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.727 [2024-11-20 12:43:41.432528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.727 [2024-11-20 12:43:41.432543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.727 [2024-11-20 12:43:41.432550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.727 [2024-11-20 12:43:41.432556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:35.727 [2024-11-20 12:43:41.432570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.727 qpair failed and we were unable to recover it. 
00:29:35.727 [2024-11-20 12:43:41.442448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.727 [2024-11-20 12:43:41.442530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.727 [2024-11-20 12:43:41.442545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.727 [2024-11-20 12:43:41.442553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.727 [2024-11-20 12:43:41.442559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.727 [2024-11-20 12:43:41.442573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.727 qpair failed and we were unable to recover it.
00:29:35.727 [2024-11-20 12:43:41.452455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.728 [2024-11-20 12:43:41.452540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.728 [2024-11-20 12:43:41.452555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.728 [2024-11-20 12:43:41.452562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.728 [2024-11-20 12:43:41.452569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.728 [2024-11-20 12:43:41.452583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.728 qpair failed and we were unable to recover it.
00:29:35.728 [2024-11-20 12:43:41.462481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.728 [2024-11-20 12:43:41.462538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.728 [2024-11-20 12:43:41.462553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.728 [2024-11-20 12:43:41.462560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.728 [2024-11-20 12:43:41.462566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.728 [2024-11-20 12:43:41.462581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.728 qpair failed and we were unable to recover it.
00:29:35.728 [2024-11-20 12:43:41.472486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.728 [2024-11-20 12:43:41.472542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.728 [2024-11-20 12:43:41.472557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.728 [2024-11-20 12:43:41.472564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.728 [2024-11-20 12:43:41.472570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.728 [2024-11-20 12:43:41.472585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.728 qpair failed and we were unable to recover it.
00:29:35.728 [2024-11-20 12:43:41.482641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.728 [2024-11-20 12:43:41.482698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.728 [2024-11-20 12:43:41.482712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.728 [2024-11-20 12:43:41.482720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.728 [2024-11-20 12:43:41.482727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.728 [2024-11-20 12:43:41.482741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.728 qpair failed and we were unable to recover it.
00:29:35.988 [2024-11-20 12:43:41.492621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.988 [2024-11-20 12:43:41.492718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.988 [2024-11-20 12:43:41.492735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.988 [2024-11-20 12:43:41.492742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.988 [2024-11-20 12:43:41.492749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.988 [2024-11-20 12:43:41.492764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.988 qpair failed and we were unable to recover it.
00:29:35.988 [2024-11-20 12:43:41.502589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.988 [2024-11-20 12:43:41.502647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.988 [2024-11-20 12:43:41.502661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.988 [2024-11-20 12:43:41.502669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.988 [2024-11-20 12:43:41.502676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.988 [2024-11-20 12:43:41.502690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.988 qpair failed and we were unable to recover it.
00:29:35.988 [2024-11-20 12:43:41.512678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.988 [2024-11-20 12:43:41.512729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.988 [2024-11-20 12:43:41.512743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.988 [2024-11-20 12:43:41.512757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.988 [2024-11-20 12:43:41.512763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.988 [2024-11-20 12:43:41.512778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.988 qpair failed and we were unable to recover it.
00:29:35.988 [2024-11-20 12:43:41.522725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.988 [2024-11-20 12:43:41.522785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.988 [2024-11-20 12:43:41.522798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.522805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.522812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.522826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.532778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.532836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.532850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.532858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.532864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.532878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.542751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.542812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.542826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.542834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.542840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.542855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.552776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.552838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.552852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.552859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.552865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.552883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.562818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.562891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.562907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.562914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.562920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.562935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.572884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.572942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.572956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.572963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.572969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.572984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.582879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.582942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.582956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.582964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.582970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.582984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.592881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.592936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.592951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.592958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.592965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.592980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.602974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.603039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.603054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.603061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.603068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.603082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.613000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.613058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.613073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.613081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.613087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.613102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.622994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.623052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.623067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.623074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.623081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.623095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.633090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.633148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.633162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.633169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.633176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.633191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.643045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.643096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.643110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.989 [2024-11-20 12:43:41.643120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.989 [2024-11-20 12:43:41.643127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.989 [2024-11-20 12:43:41.643141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.989 qpair failed and we were unable to recover it.
00:29:35.989 [2024-11-20 12:43:41.653115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.989 [2024-11-20 12:43:41.653169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.989 [2024-11-20 12:43:41.653183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.653191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.653198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.653216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.663183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.663297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.663315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.663322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.663329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.663343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.673127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.673218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.673233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.673240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.673247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.673261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.683171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.683233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.683247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.683255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.683262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.683280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.693205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.693258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.693272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.693280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.693286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.693301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.703224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.703306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.703321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.703328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.703335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.703350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.713288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.713351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.713366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.713374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.713380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.713395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.723307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.723406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.723423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.723431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.723438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.723453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.733342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.733457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.733473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.733481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.733487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.733502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:35.990 [2024-11-20 12:43:41.743385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.990 [2024-11-20 12:43:41.743445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.990 [2024-11-20 12:43:41.743459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.990 [2024-11-20 12:43:41.743466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.990 [2024-11-20 12:43:41.743473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:35.990 [2024-11-20 12:43:41.743487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.990 qpair failed and we were unable to recover it.
00:29:36.251 [2024-11-20 12:43:41.753378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.251 [2024-11-20 12:43:41.753440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.251 [2024-11-20 12:43:41.753454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.251 [2024-11-20 12:43:41.753462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.251 [2024-11-20 12:43:41.753469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:36.251 [2024-11-20 12:43:41.753484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.251 qpair failed and we were unable to recover it.
00:29:36.251 [2024-11-20 12:43:41.763465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.251 [2024-11-20 12:43:41.763519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.251 [2024-11-20 12:43:41.763534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.251 [2024-11-20 12:43:41.763542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.251 [2024-11-20 12:43:41.763548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:36.251 [2024-11-20 12:43:41.763563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.251 qpair failed and we were unable to recover it.
00:29:36.251 [2024-11-20 12:43:41.773430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.251 [2024-11-20 12:43:41.773486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.251 [2024-11-20 12:43:41.773501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.251 [2024-11-20 12:43:41.773512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.251 [2024-11-20 12:43:41.773518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:36.251 [2024-11-20 12:43:41.773533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.251 qpair failed and we were unable to recover it.
00:29:36.251 [2024-11-20 12:43:41.783491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.251 [2024-11-20 12:43:41.783550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.251 [2024-11-20 12:43:41.783565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.251 [2024-11-20 12:43:41.783573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.251 [2024-11-20 12:43:41.783579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0
00:29:36.251 [2024-11-20 12:43:41.783593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.251 qpair failed and we were unable to recover it.
00:29:36.251 [2024-11-20 12:43:41.793423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.793475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.793490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.793497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.251 [2024-11-20 12:43:41.793504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.251 [2024-11-20 12:43:41.793519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.251 qpair failed and we were unable to recover it. 
00:29:36.251 [2024-11-20 12:43:41.803508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.803578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.803593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.803599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.251 [2024-11-20 12:43:41.803605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.251 [2024-11-20 12:43:41.803619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.251 qpair failed and we were unable to recover it. 
00:29:36.251 [2024-11-20 12:43:41.813558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.813612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.813626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.813633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.251 [2024-11-20 12:43:41.813639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.251 [2024-11-20 12:43:41.813657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.251 qpair failed and we were unable to recover it. 
00:29:36.251 [2024-11-20 12:43:41.823609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.823673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.823690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.823698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.251 [2024-11-20 12:43:41.823705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.251 [2024-11-20 12:43:41.823721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.251 qpair failed and we were unable to recover it. 
00:29:36.251 [2024-11-20 12:43:41.833606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.833660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.833675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.833682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.251 [2024-11-20 12:43:41.833690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.251 [2024-11-20 12:43:41.833704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.251 qpair failed and we were unable to recover it. 
00:29:36.251 [2024-11-20 12:43:41.843683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.251 [2024-11-20 12:43:41.843744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.251 [2024-11-20 12:43:41.843758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.251 [2024-11-20 12:43:41.843766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.843772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.843786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.853680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.853745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.853761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.853769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.853775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.853790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.863705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.863775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.863791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.863799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.863805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.863820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.873685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.873740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.873755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.873762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.873769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.873784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.883796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.883901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.883915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.883922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.883929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.883943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.893792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.893843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.893857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.893864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.893871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.893886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.903870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.903974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.903988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.903998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.904005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.904019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.913848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.913905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.913921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.913929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.913935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.913951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.923928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.923989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.924004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.924012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.924018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.924033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.933944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.933998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.934012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.934020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.934026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.934041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.252 qpair failed and we were unable to recover it. 
00:29:36.252 [2024-11-20 12:43:41.943938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.252 [2024-11-20 12:43:41.943992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.252 [2024-11-20 12:43:41.944007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.252 [2024-11-20 12:43:41.944014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.252 [2024-11-20 12:43:41.944020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.252 [2024-11-20 12:43:41.944038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:41.954025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:41.954077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:41.954094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:41.954102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:41.954109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:41.954123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:41.964016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:41.964071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:41.964087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:41.964095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:41.964101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:41.964116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:41.973940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:41.974036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:41.974050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:41.974057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:41.974063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:41.974079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:41.984046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:41.984105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:41.984120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:41.984127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:41.984134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:41.984148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:41.994085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:41.994142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:41.994157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:41.994165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:41.994171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:41.994186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.253 [2024-11-20 12:43:42.004140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.253 [2024-11-20 12:43:42.004213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.253 [2024-11-20 12:43:42.004227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.253 [2024-11-20 12:43:42.004235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.253 [2024-11-20 12:43:42.004241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.253 [2024-11-20 12:43:42.004256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.253 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.014145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.014209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.014224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.014232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.014239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.014254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.024185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.024252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.024266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.024275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.024280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.024295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.034210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.034268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.034283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.034293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.034300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.034315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.044225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.044283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.044298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.044305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.044311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.044326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.054299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.054364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.054379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.054387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.054392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.054407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.064361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.064440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.064455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.064462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.513 [2024-11-20 12:43:42.064468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.513 [2024-11-20 12:43:42.064483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.513 qpair failed and we were unable to recover it. 
00:29:36.513 [2024-11-20 12:43:42.074314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.513 [2024-11-20 12:43:42.074387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.513 [2024-11-20 12:43:42.074402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.513 [2024-11-20 12:43:42.074409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.074415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.074433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.084331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.084393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.084409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.084417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.084424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.084438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.094305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.094401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.094416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.094423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.094430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.094445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.104393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.104447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.104461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.104468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.104474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.104489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.114458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.114516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.114531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.114538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.114545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.114560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.124438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.124497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.124510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.124518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.124524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7b9ba0 00:29:36.514 [2024-11-20 12:43:42.124538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.134548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.134679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.134738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.134764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.134787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1acc000b90 00:29:36.514 [2024-11-20 12:43:42.134837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.514 qpair failed and we were unable to recover it. 
00:29:36.514 [2024-11-20 12:43:42.144512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.514 [2024-11-20 12:43:42.144591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.514 [2024-11-20 12:43:42.144618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.514 [2024-11-20 12:43:42.144633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.514 [2024-11-20 12:43:42.144646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1acc000b90 00:29:36.514 [2024-11-20 12:43:42.144677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.514 qpair failed and we were unable to recover it. 00:29:36.514 [2024-11-20 12:43:42.144782] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:36.514 A controller has encountered a failure and is being reset. 00:29:36.514 Controller properly reset. 00:29:36.514 Initializing NVMe Controllers 00:29:36.514 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:36.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:36.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:36.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:36.514 Initialization complete. Launching workers. 
00:29:36.514 Starting thread on core 1 00:29:36.514 Starting thread on core 2 00:29:36.514 Starting thread on core 3 00:29:36.514 Starting thread on core 0 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:36.514 00:29:36.514 real 0m11.390s 00:29:36.514 user 0m21.758s 00:29:36.514 sys 0m4.754s 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.514 ************************************ 00:29:36.514 END TEST nvmf_target_disconnect_tc2 00:29:36.514 ************************************ 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.514 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:36.515 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.515 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:36.515 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.515 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.515 rmmod nvme_tcp 00:29:36.515 rmmod nvme_fabrics 00:29:36.515 rmmod nvme_keyring 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 343812 ']' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 343812 ']' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343812' 00:29:36.773 killing process with pid 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 343812 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.773 12:43:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.309 12:43:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.309 00:29:39.309 real 0m20.164s 00:29:39.309 user 0m49.434s 00:29:39.309 sys 0m9.585s 00:29:39.309 12:43:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.309 12:43:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 ************************************ 00:29:39.309 END TEST nvmf_target_disconnect 00:29:39.309 ************************************ 00:29:39.309 12:43:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:39.309 00:29:39.309 real 5m53.504s 00:29:39.309 user 10m35.782s 00:29:39.309 sys 1m58.617s 00:29:39.309 12:43:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.309 12:43:44 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 ************************************ 00:29:39.309 END TEST nvmf_host 00:29:39.309 ************************************ 00:29:39.309 12:43:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:39.309 12:43:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:39.309 12:43:44 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.309 12:43:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:39.309 12:43:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.309 12:43:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 ************************************ 00:29:39.309 START TEST nvmf_target_core_interrupt_mode 00:29:39.309 ************************************ 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.309 * Looking for test storage... 
00:29:39.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:39.309 12:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.309 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.309 --rc 
genhtml_branch_coverage=1 00:29:39.309 --rc genhtml_function_coverage=1 00:29:39.309 --rc genhtml_legend=1 00:29:39.309 --rc geninfo_all_blocks=1 00:29:39.310 --rc geninfo_unexecuted_blocks=1 00:29:39.310 00:29:39.310 ' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.310 --rc genhtml_branch_coverage=1 00:29:39.310 --rc genhtml_function_coverage=1 00:29:39.310 --rc genhtml_legend=1 00:29:39.310 --rc geninfo_all_blocks=1 00:29:39.310 --rc geninfo_unexecuted_blocks=1 00:29:39.310 00:29:39.310 ' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.310 --rc genhtml_branch_coverage=1 00:29:39.310 --rc genhtml_function_coverage=1 00:29:39.310 --rc genhtml_legend=1 00:29:39.310 --rc geninfo_all_blocks=1 00:29:39.310 --rc geninfo_unexecuted_blocks=1 00:29:39.310 00:29:39.310 ' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.310 --rc genhtml_branch_coverage=1 00:29:39.310 --rc genhtml_function_coverage=1 00:29:39.310 --rc genhtml_legend=1 00:29:39.310 --rc geninfo_all_blocks=1 00:29:39.310 --rc geninfo_unexecuted_blocks=1 00:29:39.310 00:29:39.310 ' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.310 
12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.310 12:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.310 
12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:39.310 ************************************ 00:29:39.310 START TEST nvmf_abort 00:29:39.310 ************************************ 00:29:39.310 12:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:39.310 * Looking for test storage... 
00:29:39.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.310 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.310 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.310 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.570 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:39.571 12:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.571 --rc genhtml_branch_coverage=1 00:29:39.571 --rc genhtml_function_coverage=1 00:29:39.571 --rc genhtml_legend=1 00:29:39.571 --rc geninfo_all_blocks=1 00:29:39.571 --rc geninfo_unexecuted_blocks=1 00:29:39.571 00:29:39.571 ' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.571 --rc genhtml_branch_coverage=1 00:29:39.571 --rc genhtml_function_coverage=1 00:29:39.571 --rc genhtml_legend=1 00:29:39.571 --rc geninfo_all_blocks=1 00:29:39.571 --rc geninfo_unexecuted_blocks=1 00:29:39.571 00:29:39.571 ' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.571 --rc genhtml_branch_coverage=1 00:29:39.571 --rc genhtml_function_coverage=1 00:29:39.571 --rc genhtml_legend=1 00:29:39.571 --rc geninfo_all_blocks=1 00:29:39.571 --rc geninfo_unexecuted_blocks=1 00:29:39.571 00:29:39.571 ' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.571 --rc genhtml_branch_coverage=1 00:29:39.571 --rc genhtml_function_coverage=1 00:29:39.571 --rc genhtml_legend=1 00:29:39.571 --rc geninfo_all_blocks=1 00:29:39.571 --rc geninfo_unexecuted_blocks=1 00:29:39.571 00:29:39.571 ' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.571 12:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.571 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.571 12:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.572 12:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.141 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.142 12:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:46.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:46.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.142 
12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:46.142 Found net devices under 0000:86:00.0: cvl_0_0 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:46.142 Found net devices under 0000:86:00.1: cvl_0_1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.142 12:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.142 12:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:29:46.142 00:29:46.142 --- 10.0.0.2 ping statistics --- 00:29:46.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.142 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:46.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:29:46.142 00:29:46.142 --- 10.0.0.1 ping statistics --- 00:29:46.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.142 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.142 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=348352 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 348352 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 348352 ']' 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.143 12:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.143 [2024-11-20 12:43:51.170445] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:46.143 [2024-11-20 12:43:51.171329] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:29:46.143 [2024-11-20 12:43:51.171362] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.143 [2024-11-20 12:43:51.252793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.143 [2024-11-20 12:43:51.295231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.143 [2024-11-20 12:43:51.295268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.143 [2024-11-20 12:43:51.295276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.143 [2024-11-20 12:43:51.295282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.143 [2024-11-20 12:43:51.295287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.143 [2024-11-20 12:43:51.296718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.143 [2024-11-20 12:43:51.296829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.143 [2024-11-20 12:43:51.296829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.143 [2024-11-20 12:43:51.363460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:46.143 [2024-11-20 12:43:51.364247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:46.143 [2024-11-20 12:43:51.364508] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:46.143 [2024-11-20 12:43:51.364658] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 [2024-11-20 12:43:52.053680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:46.403 Malloc0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 Delay0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.403 [2024-11-20 12:43:52.149629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.403 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:46.662 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.662 12:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:46.662 [2024-11-20 12:43:52.240404] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:48.565 Initializing NVMe Controllers 00:29:48.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:48.565 controller IO queue size 128 less than required 00:29:48.565 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:48.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:48.565 Initialization complete. Launching workers. 
00:29:48.565 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38005 00:29:48.565 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38062, failed to submit 66 00:29:48.565 success 38005, unsuccessful 57, failed 0 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.565 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.565 rmmod nvme_tcp 00:29:48.565 rmmod nvme_fabrics 00:29:48.825 rmmod nvme_keyring 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.825 12:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 348352 ']' 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 348352 ']' 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348352' 00:29:48.825 killing process with pid 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 348352 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.825 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:49.084 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:49.084 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.085 12:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.050 00:29:51.050 real 0m11.723s 00:29:51.050 user 0m10.166s 00:29:51.050 sys 0m5.777s 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.050 ************************************ 00:29:51.050 END TEST nvmf_abort 00:29:51.050 ************************************ 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:51.050 ************************************ 00:29:51.050 START TEST nvmf_ns_hotplug_stress 00:29:51.050 ************************************ 00:29:51.050 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:51.310 * Looking for test storage... 00:29:51.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.310 --rc genhtml_branch_coverage=1 00:29:51.310 --rc genhtml_function_coverage=1 00:29:51.310 --rc genhtml_legend=1 00:29:51.310 --rc geninfo_all_blocks=1 00:29:51.310 --rc geninfo_unexecuted_blocks=1 00:29:51.310 00:29:51.310 ' 00:29:51.310 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.311 --rc genhtml_branch_coverage=1 00:29:51.311 --rc genhtml_function_coverage=1 00:29:51.311 --rc genhtml_legend=1 00:29:51.311 --rc geninfo_all_blocks=1 00:29:51.311 --rc geninfo_unexecuted_blocks=1 00:29:51.311 00:29:51.311 ' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.311 --rc genhtml_branch_coverage=1 00:29:51.311 --rc genhtml_function_coverage=1 00:29:51.311 --rc genhtml_legend=1 00:29:51.311 --rc geninfo_all_blocks=1 00:29:51.311 --rc geninfo_unexecuted_blocks=1 00:29:51.311 00:29:51.311 ' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.311 --rc genhtml_branch_coverage=1 00:29:51.311 --rc genhtml_function_coverage=1 00:29:51.311 --rc genhtml_legend=1 00:29:51.311 --rc geninfo_all_blocks=1 00:29:51.311 --rc geninfo_unexecuted_blocks=1 00:29:51.311 00:29:51.311 ' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.311 12:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.311 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.312 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.312 12:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.312 12:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:57.882 12:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.882 
12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:57.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.882 12:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:57.882 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.882 12:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:57.882 Found net devices under 0000:86:00.0: cvl_0_0 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.882 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:57.883 Found net devices under 0000:86:00.1: cvl_0_1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:29:57.883 00:29:57.883 --- 10.0.0.2 ping statistics --- 00:29:57.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.883 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:29:57.883 00:29:57.883 --- 10.0.0.1 ping statistics --- 00:29:57.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.883 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.883 12:44:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=352373 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 352373 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 352373 ']' 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:57.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.883 12:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:57.883 [2024-11-20 12:44:02.860630] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:57.883 [2024-11-20 12:44:02.861529] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:29:57.883 [2024-11-20 12:44:02.861561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.883 [2024-11-20 12:44:02.939600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:57.883 [2024-11-20 12:44:02.979428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.883 [2024-11-20 12:44:02.979466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.883 [2024-11-20 12:44:02.979473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.883 [2024-11-20 12:44:02.979479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.883 [2024-11-20 12:44:02.979484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.883 [2024-11-20 12:44:02.980900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.883 [2024-11-20 12:44:02.981010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.883 [2024-11-20 12:44:02.981011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.883 [2024-11-20 12:44:03.048081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:57.883 [2024-11-20 12:44:03.048872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:57.883 [2024-11-20 12:44:03.049191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:57.883 [2024-11-20 12:44:03.049329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
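The log above shows nvmf/common.sh moving one port of the E810 pair (cvl_0_0) into a dedicated namespace, addressing both sides, opening port 4420, and verifying reachability with ping before the target starts. A minimal dry-run sketch of that sequence, with interface names and IPs taken from the log; the commands are only printed, not executed, so the sequence can be inspected without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns bring-up seen in the log. Interface
# names (cvl_0_0 / cvl_0_1), the namespace name, and the addresses
# are copied from the log output; each step is printed rather than
# run, so this needs neither root nor the E810 hardware.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the root namespace

steps=(
  "ip -4 addr flush $TGT_IF"
  "ip -4 addr flush $INI_IF"
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INI_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"
  "ip netns exec $NS ping -c 1 10.0.0.1"
)

for s in "${steps[@]}"; do
  printf '%s\n' "$s"
done
```

Once both pings succeed, the harness prefixes every nvmf_tgt invocation with `ip netns exec cvl_0_0_ns_spdk`, which is why the target log lines later run inside that namespace.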
00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:57.883 [2024-11-20 12:44:03.297809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:57.883 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.143 [2024-11-20 12:44:03.702299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.143 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:58.402 12:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:58.402 Malloc0 00:29:58.402 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:58.661 Delay0 00:29:58.661 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.920 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:59.179 NULL1 00:29:59.179 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:59.179 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=352836 00:29:59.179 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:59.179 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:29:59.179 12:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.562 Read completed with error (sct=0, sc=11) 00:30:00.562 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.562 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:00.562 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:00.820 true 00:30:00.820 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:00.820 12:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.754 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.013 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:02.013 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:02.014 true 00:30:02.014 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:02.014 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:02.285 12:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.583 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:02.583 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:02.583 true 00:30:02.583 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:02.583 12:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.974 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.975 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:03.975 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:03.975 true 00:30:03.975 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:03.975 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.234 12:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.492 12:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:04.492 12:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:04.750 true 00:30:04.750 12:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:04.750 12:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.685 12:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.943 12:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:05.943 12:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:06.202 true 00:30:06.202 12:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:06.202 12:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.461 12:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.461 12:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:06.461 12:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:06.720 true 00:30:06.720 12:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:06.720 12:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.094 12:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.094 12:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:08.094 12:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:08.094 true 00:30:08.094 12:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:08.094 12:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.353 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.611 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:08.611 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:08.870 true 00:30:08.870 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:08.870 12:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:09.804 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:09.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:10.062 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:10.062 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:10.321 true 00:30:10.321 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:10.321 12:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:11.257 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.257 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:11.257 12:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:11.514 true 00:30:11.514 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:11.514 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.772 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.030 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:12.030 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:12.030 true 00:30:12.030 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:12.030 12:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 12:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:13.409 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:13.409 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:13.669 true 00:30:13.669 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:13.669 12:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.604 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.604 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:14.604 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:14.862 true 00:30:14.862 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:14.862 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.122 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.380 12:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:15.380 12:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:15.380 true 00:30:15.380 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:15.380 12:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.756 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:16.756 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:17.015 true 00:30:17.015 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:17.015 12:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.951 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:17.951 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:17.951 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:18.209 true 00:30:18.209 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:18.209 12:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.468 12:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.726 12:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:18.726 12:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:18.726 true 00:30:18.986 12:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:18.986 12:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.923 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.182 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:20.182 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:20.182 true 00:30:20.440 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:20.440 12:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.007 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.267 12:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:21.267 12:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:21.526 true 00:30:21.526 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:21.526 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.785 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.043 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:22.043 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:22.043 true 00:30:22.043 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:22.043 12:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 12:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:23.418 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:23.418 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:23.675 true 00:30:23.675 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:23.675 12:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.611 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.611 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:24.611 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:24.869 true 00:30:24.869 12:44:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:24.869 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.128 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.387 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:25.387 12:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:25.387 true 00:30:25.387 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:25.387 12:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.765 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:26.765 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:27.023 true 00:30:27.023 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:27.023 12:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.959 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.959 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:27.959 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:28.218 true 00:30:28.218 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836 00:30:28.218 12:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.477 12:44:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:28.478 12:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:30:28.478 12:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:30:28.736 true
00:30:28.736 12:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836
00:30:28.736 12:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:30.113 Initializing NVMe Controllers
00:30:30.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:30.113 Controller IO queue size 128, less than required.
00:30:30.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:30.113 Controller IO queue size 128, less than required.
00:30:30.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:30.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:30.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:30.113 Initialization complete. Launching workers.
00:30:30.113 ========================================================
00:30:30.113 Latency(us)
00:30:30.113 Device Information : IOPS MiB/s Average min max
00:30:30.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.87 0.82 51351.10 2930.60 1026319.97
00:30:30.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17871.90 8.73 7161.79 1575.16 296360.34
00:30:30.113 ========================================================
00:30:30.113 Total : 19559.77 9.55 10975.01 1575.16 1026319.97
00:30:30.113
00:30:30.113 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:30.113 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:30:30.113 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:30:30.113 true
00:30:30.372 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 352836
00:30:30.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (352836) - No such process
00:30:30.372 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 352836
00:30:30.372 12:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:30.372 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.631 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:30.631 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:30.631 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:30.631 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:30.631 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:30.890 null0 00:30:30.890 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:30.890 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:30.890 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:30.890 null1 00:30:31.149 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.149 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.149 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:31.149 null2 00:30:31.149 12:44:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.149 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.149 12:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:31.408 null3 00:30:31.408 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.408 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.408 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:31.667 null4 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:31.667 null5 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.667 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:31.926 null6 00:30:31.926 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:31.926 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:31.926 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:32.186 null7 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.186 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 358169 358171 358172 358174 358176 358178 358180 358182 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.187 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:32.446 12:44:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:32.446 12:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.446 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:32.705 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:32.705 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:32.963 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.963 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:32.964 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:32.964 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:33.223 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.223 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:33.482 12:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.482 12:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:33.482 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:33.742 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:34.001 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:34.002 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:34.260 12:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.260 12:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:34.260 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:34.260 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:34.519 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:34.779 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.039 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.299 12:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.299 12:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.299 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.559 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.819 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.820 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.079 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:36.339 12:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.339 rmmod nvme_tcp 00:30:36.339 rmmod nvme_fabrics 00:30:36.339 rmmod nvme_keyring 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 352373 ']' 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 352373 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 352373 ']' 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 352373 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.339 12:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352373 00:30:36.339 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.339 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.339 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352373' 00:30:36.339 killing process with pid 352373 00:30:36.339 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 352373 00:30:36.339 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 352373 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-restore 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.599 12:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.502 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.761 00:30:38.761 real 0m47.533s 00:30:38.761 user 2m57.254s 00:30:38.761 sys 0m20.046s 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:38.761 ************************************ 00:30:38.761 END TEST nvmf_ns_hotplug_stress 00:30:38.761 ************************************ 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- 
# set +x 00:30:38.761 ************************************ 00:30:38.761 START TEST nvmf_delete_subsystem 00:30:38.761 ************************************ 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:38.761 * Looking for test storage... 00:30:38.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read 
-ra ver2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.761 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.020 --rc genhtml_branch_coverage=1 00:30:39.020 --rc genhtml_function_coverage=1 00:30:39.020 --rc genhtml_legend=1 00:30:39.020 --rc geninfo_all_blocks=1 00:30:39.020 --rc geninfo_unexecuted_blocks=1 00:30:39.020 00:30:39.020 ' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.020 --rc genhtml_branch_coverage=1 00:30:39.020 --rc genhtml_function_coverage=1 00:30:39.020 --rc genhtml_legend=1 00:30:39.020 --rc geninfo_all_blocks=1 00:30:39.020 --rc geninfo_unexecuted_blocks=1 00:30:39.020 00:30:39.020 ' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.020 --rc genhtml_branch_coverage=1 00:30:39.020 --rc genhtml_function_coverage=1 00:30:39.020 --rc genhtml_legend=1 00:30:39.020 --rc geninfo_all_blocks=1 00:30:39.020 --rc geninfo_unexecuted_blocks=1 00:30:39.020 00:30:39.020 ' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.020 --rc genhtml_branch_coverage=1 00:30:39.020 --rc genhtml_function_coverage=1 00:30:39.020 --rc genhtml_legend=1 00:30:39.020 --rc geninfo_all_blocks=1 00:30:39.020 --rc geninfo_unexecuted_blocks=1 00:30:39.020 00:30:39.020 ' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.020 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.021 12:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.021 12:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.591 12:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.591 12:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.591 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.592 12:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.592 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.592 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.592 12:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.592 12:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:30:45.592 00:30:45.592 --- 10.0.0.2 ping statistics --- 00:30:45.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.592 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:30:45.592 00:30:45.592 --- 10.0.0.1 ping statistics --- 00:30:45.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.592 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.592 
12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=362521 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 362521 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 362521 ']' 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.592 12:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.592 [2024-11-20 12:44:50.521756] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.592 [2024-11-20 12:44:50.522703] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:30:45.592 [2024-11-20 12:44:50.522753] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.592 [2024-11-20 12:44:50.599951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:45.592 [2024-11-20 12:44:50.638881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.592 [2024-11-20 12:44:50.638917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.593 [2024-11-20 12:44:50.638925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.593 [2024-11-20 12:44:50.638931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.593 [2024-11-20 12:44:50.638936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.593 [2024-11-20 12:44:50.640151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.593 [2024-11-20 12:44:50.640152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.593 [2024-11-20 12:44:50.706644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:45.593 [2024-11-20 12:44:50.707177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:45.593 [2024-11-20 12:44:50.707417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 [2024-11-20 12:44:51.400977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 [2024-11-20 12:44:51.429377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 NULL1 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 Delay0 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=362570 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:45.852 12:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:45.852 [2024-11-20 12:44:51.541182] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
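The records above show the target being configured through a fixed RPC sequence: create the TCP transport, create subsystem cnode1, add a listener on 10.0.0.2:4420, create a null bdev, wrap it in a delay bdev, and attach it as a namespace. A minimal dry-run sketch of that sequence, assuming `rpc.py` is the usual SPDK RPC client (the log drives these through the test's `rpc_cmd` wrapper); the `rpc` stub here only prints each command rather than contacting a live target:

```shell
# Dry-run sketch of the setup RPCs seen in the log. The rpc() stub prints the
# command that would be issued; replace it with the real scripts/rpc.py to run
# against a live nvmf_tgt. Arguments are taken verbatim from the log.
rpc() { echo "rpc.py $*"; }   # stand-in: print instead of executing

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
# Delay bdev on top of NULL1: 1s average read/write latency (values in usec)
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is what makes the later `nvmf_delete_subsystem` interesting: with ~1s per I/O, the subsystem is deleted while requests are still queued, which is the failure path the test exercises.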
00:30:47.758 12:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.758 12:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.758 12:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, 
sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 [2024-11-20 12:44:53.615144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19384a0 is same with the state(6) to be set 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 
00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write 
completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with 
error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 Read completed with error (sct=0, sc=8) 00:30:48.017 starting I/O failed: -6 00:30:48.017 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 starting I/O failed: -6 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 starting I/O failed: -6 00:30:48.018 [2024-11-20 12:44:53.618836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc24000c40 is same with the state(6) to be set 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 
00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 Read completed with error (sct=0, sc=8) 00:30:48.018 Write completed with error (sct=0, sc=8) 00:30:48.018 [2024-11-20 12:44:53.619183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc2400d4b0 is same with the state(6) to be set 00:30:48.954 [2024-11-20 12:44:54.594240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19399a0 is same with the state(6) to be set 00:30:48.954 Read completed with error (sct=0, 
sc=8) 00:30:48.954 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 [2024-11-20 12:44:54.618306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1938680 is same with the state(6) to be set 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 
00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 [2024-11-20 12:44:54.618672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19382c0 is same with the state(6) to be set 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 [2024-11-20 12:44:54.620247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc2400d7e0 is same with the state(6) to be set 00:30:48.955 Write completed with error (sct=0, sc=8) 
00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 Write completed with error (sct=0, sc=8) 00:30:48.955 Read completed with error (sct=0, sc=8) 00:30:48.955 [2024-11-20 12:44:54.622029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc2400d020 is same with the state(6) to be set 00:30:48.955 Initializing NVMe Controllers 00:30:48.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.955 Controller IO queue size 128, less than required. 00:30:48.955 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:48.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:48.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:48.955 Initialization complete. Launching workers. 
00:30:48.955 ======================================================== 00:30:48.955 Latency(us) 00:30:48.955 Device Information : IOPS MiB/s Average min max 00:30:48.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.17 0.08 892797.03 281.23 1006329.96 00:30:48.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.76 0.07 936322.26 363.04 1010737.74 00:30:48.955 ======================================================== 00:30:48.955 Total : 323.92 0.16 913322.76 281.23 1010737.74 00:30:48.955 00:30:48.955 [2024-11-20 12:44:54.622586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19399a0 (9): Bad file descriptor 00:30:48.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:48.955 12:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.955 12:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:48.955 12:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 362570 00:30:48.955 12:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 362570 00:30:49.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (362570) - No such process 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 362570 00:30:49.524 12:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 362570 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 362570 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:49.524 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.525 [2024-11-20 12:44:55.157278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=363258 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:49.525 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:49.525 [2024-11-20 12:44:55.243955] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:50.093 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:50.093 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:50.093 12:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:50.661 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:50.661 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:50.661 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:51.229 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:51.229 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:51.229 12:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:51.487 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:30:51.487 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:51.487 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:52.056 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:52.056 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:52.056 12:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:52.625 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:52.625 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:52.625 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:52.884 Initializing NVMe Controllers 00:30:52.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.884 Controller IO queue size 128, less than required. 00:30:52.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:52.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:52.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:52.884 Initialization complete. Launching workers. 
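The repeated `(( delay++ > 20 ))` / `kill -0 363258` / `sleep 0.5` records above are delete_subsystem.sh's bounded wait for the perf process to exit. A minimal standalone sketch of that polling pattern, with a plain `sleep` standing in for `spdk_nvme_perf` (the timeout message is this sketch's own, not from the test):

```shell
# Bounded wait for a background process, as in the log's delay/kill -0 loop.
sleep 1 &              # stand-in workload; the real test launches spdk_nvme_perf
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    # Give up after ~10s (20 polls x 0.5s), matching the test's bound
    if (( delay++ > 20 )); then
        echo "workload did not exit in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null
echo "workload (pid $perf_pid) exited"
```

Note that `kill -0` sends no signal; it only checks whether the pid still exists, which is why the log later prints `kill: (363258) - No such process` once perf has gone away.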
00:30:52.884 ======================================================== 00:30:52.884 Latency(us) 00:30:52.884 Device Information : IOPS MiB/s Average min max 00:30:52.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002193.80 1000125.36 1040486.62 00:30:52.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004621.74 1000245.38 1042285.71 00:30:52.884 ======================================================== 00:30:52.884 Total : 256.00 0.12 1003407.77 1000125.36 1042285.71 00:30:52.884 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 363258 00:30:53.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (363258) - No such process 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 363258 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.143 rmmod nvme_tcp 00:30:53.143 rmmod nvme_fabrics 00:30:53.143 rmmod nvme_keyring 00:30:53.143 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 362521 ']' 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 362521 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 362521 ']' 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 362521 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362521 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 362521' 00:30:53.144 killing process with pid 362521 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 362521 00:30:53.144 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 362521 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.403 12:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:55.420 00:30:55.420 real 0m16.710s 00:30:55.420 user 0m25.982s 00:30:55.420 sys 0m6.286s 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.420 ************************************ 00:30:55.420 END TEST nvmf_delete_subsystem 00:30:55.420 ************************************ 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:55.420 ************************************ 00:30:55.420 START TEST nvmf_host_management 00:30:55.420 ************************************ 00:30:55.420 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:55.681 * Looking for test storage... 
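During the teardown traced above, `nvmf/common.sh`'s `iptr` re-applies the live firewall ruleset with every SPDK-tagged rule removed: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step can be exercised without root as plain text; `strip_spdk_rules` is an illustrative name, not part of the SPDK scripts:

```shell
# Text-only equivalent of the iptr filter: drop any saved iptables rule
# that carries the SPDK_NVMF tag, pass everything else through.
# The real teardown pipes this into iptables-restore (needs root):
#   iptables-save | strip_spdk_rules | iptables-restore
strip_spdk_rules() {
  grep -v 'SPDK_NVMF'
}
```

One caveat worth knowing if you wrap this under `set -e`: `grep` exits non-zero when every input line is filtered out.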
00:30:55.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.681 12:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:55.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.681 --rc genhtml_branch_coverage=1 00:30:55.681 --rc genhtml_function_coverage=1 00:30:55.681 --rc genhtml_legend=1 00:30:55.681 --rc geninfo_all_blocks=1 00:30:55.681 --rc geninfo_unexecuted_blocks=1 00:30:55.681 00:30:55.681 ' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:55.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.681 --rc genhtml_branch_coverage=1 00:30:55.681 --rc genhtml_function_coverage=1 00:30:55.681 --rc genhtml_legend=1 00:30:55.681 --rc geninfo_all_blocks=1 00:30:55.681 --rc geninfo_unexecuted_blocks=1 00:30:55.681 00:30:55.681 ' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:55.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.681 --rc genhtml_branch_coverage=1 00:30:55.681 --rc genhtml_function_coverage=1 00:30:55.681 --rc genhtml_legend=1 00:30:55.681 --rc geninfo_all_blocks=1 00:30:55.681 --rc geninfo_unexecuted_blocks=1 00:30:55.681 00:30:55.681 ' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:55.681 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.681 --rc genhtml_branch_coverage=1 00:30:55.681 --rc genhtml_function_coverage=1 00:30:55.681 --rc genhtml_legend=1 00:30:55.681 --rc geninfo_all_blocks=1 00:30:55.681 --rc geninfo_unexecuted_blocks=1 00:30:55.681 00:30:55.681 ' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.681 12:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.681 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.682 
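Earlier in this trace, `scripts/common.sh` decides whether the installed lcov predates 2.x via `lt 1.15 2`: `cmp_versions` splits both version strings on `.` and `-` (`IFS=.-` with `read -ra`) and compares field by field. A self-contained sketch of that comparison for purely numeric fields (the helper name is illustrative, and the real script also handles non-numeric suffixes this sketch ignores):

```shell
# Return 0 (true) when $1 sorts strictly before $2, comparing
# dot/dash-separated numeric fields; a missing field counts as 0.
version_lt() {
  local IFS='.-' i a b
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
    a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # versions are equal
}
```

With this definition, `version_lt 1.15 2` succeeds (so the coverage scripts take the pre-2.0 lcov option set), while `version_lt 2 1.15` and the equal-version case both fail.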
12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:55.682 12:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.256 
12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.256 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:02.256 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.256 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:02.256 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.256 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:02.256 Found net devices under 0000:86:00.0: cvl_0_0 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:02.256 Found net devices under 0000:86:00.1: cvl_0_1 00:31:02.256 12:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.256 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.257 12:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:31:02.257 00:31:02.257 --- 10.0.0.2 ping statistics --- 00:31:02.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.257 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:02.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:02.257 00:31:02.257 --- 10.0.0.1 ping statistics --- 00:31:02.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.257 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=367765 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 367765 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 367765 ']' 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 [2024-11-20 12:45:07.313619] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.257 [2024-11-20 12:45:07.314491] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:31:02.257 [2024-11-20 12:45:07.314523] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.257 [2024-11-20 12:45:07.392185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.257 [2024-11-20 12:45:07.434226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.257 [2024-11-20 12:45:07.434266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.257 [2024-11-20 12:45:07.434275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.257 [2024-11-20 12:45:07.434280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.257 [2024-11-20 12:45:07.434285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:02.257 [2024-11-20 12:45:07.435663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.257 [2024-11-20 12:45:07.435769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.257 [2024-11-20 12:45:07.435879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.257 [2024-11-20 12:45:07.435881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:02.257 [2024-11-20 12:45:07.501738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.257 [2024-11-20 12:45:07.502815] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:02.257 [2024-11-20 12:45:07.502816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:02.257 [2024-11-20 12:45:07.503200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:02.257 [2024-11-20 12:45:07.503255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 [2024-11-20 12:45:07.576677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 12:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.257 Malloc0 00:31:02.257 [2024-11-20 12:45:07.664867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.257 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=367934 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 367934 /var/tmp/bdevperf.sock 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 367934 ']' 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:02.258 { 00:31:02.258 "params": { 00:31:02.258 "name": "Nvme$subsystem", 00:31:02.258 "trtype": "$TEST_TRANSPORT", 00:31:02.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.258 "adrfam": "ipv4", 00:31:02.258 "trsvcid": "$NVMF_PORT", 00:31:02.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.258 "hdgst": ${hdgst:-false}, 00:31:02.258 "ddgst": ${ddgst:-false} 00:31:02.258 }, 00:31:02.258 "method": "bdev_nvme_attach_controller" 00:31:02.258 } 00:31:02.258 EOF 00:31:02.258 )") 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:02.258 12:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:02.258 "params": { 00:31:02.258 "name": "Nvme0", 00:31:02.258 "trtype": "tcp", 00:31:02.258 "traddr": "10.0.0.2", 00:31:02.258 "adrfam": "ipv4", 00:31:02.258 "trsvcid": "4420", 00:31:02.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.258 "hdgst": false, 00:31:02.258 "ddgst": false 00:31:02.258 }, 00:31:02.258 "method": "bdev_nvme_attach_controller" 00:31:02.258 }' 00:31:02.258 [2024-11-20 12:45:07.757695] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:31:02.258 [2024-11-20 12:45:07.757742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367934 ] 00:31:02.258 [2024-11-20 12:45:07.835860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.258 [2024-11-20 12:45:07.876910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.517 Running I/O for 10 seconds... 
00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:02.517 12:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=82 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 82 -ge 100 ']' 00:31:02.517 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:02.776 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.036 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.037 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.037 [2024-11-20 12:45:08.556705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.037 [2024-11-20 12:45:08.556935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.037 [2024-11-20 12:45:08.556943 - 12:45:08.557695] nvme_qpair.c: [repetitive completion dump condensed: each outstanding I/O was printed as a nvme_io_qpair_print_command / spdk_nvme_print_completion pair reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 — WRITE sqid:1 cid:47-63 nsid:1 lba:104320-106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then READ sqid:1 cid:0-33 nsid:1 lba:98304-102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0] 00:31:03.038
[2024-11-20 12:45:08.557701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.038 [2024-11-20 12:45:08.558654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:03.038 task offset: 102656 on job bdev=Nvme0n1 fails 00:31:03.038 00:31:03.038 Latency(us) 00:31:03.038 [2024-11-20T11:45:08.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:03.038 Job: Nvme0n1 ended in about 0.40 seconds with error 00:31:03.038 Verification LBA range: start 0x0 length 0x400 00:31:03.038 Nvme0n1 : 0.40 1908.75 119.30 159.06 0.00 30132.73 1451.15 26588.89 00:31:03.038 [2024-11-20T11:45:08.804Z] =================================================================================================================== 00:31:03.038 [2024-11-20T11:45:08.804Z] Total : 1908.75 119.30 159.06 0.00 30132.73 1451.15 26588.89 00:31:03.038 [2024-11-20 12:45:08.561001] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:03.038 [2024-11-20 12:45:08.561028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6500 (9): Bad file descriptor 00:31:03.038 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.038 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:03.038 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.038 [2024-11-20 12:45:08.562112] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:03.038 
[2024-11-20 12:45:08.562185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:03.038 [2024-11-20 12:45:08.562234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.038 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.038 [2024-11-20 12:45:08.562248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:03.038 [2024-11-20 12:45:08.562257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:03.038 [2024-11-20 12:45:08.562267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.038 [2024-11-20 12:45:08.562274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8e6500 00:31:03.038 [2024-11-20 12:45:08.562294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6500 (9): Bad file descriptor 00:31:03.039 [2024-11-20 12:45:08.562306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:03.039 [2024-11-20 12:45:08.562313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:03.039 [2024-11-20 12:45:08.562323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:03.039 [2024-11-20 12:45:08.562331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:03.039 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.039 12:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 367934 00:31:03.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (367934) - No such process 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:03.975 { 00:31:03.975 "params": { 00:31:03.975 "name": "Nvme$subsystem", 00:31:03.975 "trtype": "$TEST_TRANSPORT", 00:31:03.975 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:03.975 "adrfam": "ipv4", 00:31:03.975 "trsvcid": "$NVMF_PORT", 00:31:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.975 "hdgst": ${hdgst:-false}, 00:31:03.975 "ddgst": ${ddgst:-false} 00:31:03.975 }, 00:31:03.975 "method": "bdev_nvme_attach_controller" 00:31:03.975 } 00:31:03.975 EOF 00:31:03.975 )") 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:03.975 12:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:03.975 "params": { 00:31:03.975 "name": "Nvme0", 00:31:03.975 "trtype": "tcp", 00:31:03.975 "traddr": "10.0.0.2", 00:31:03.975 "adrfam": "ipv4", 00:31:03.975 "trsvcid": "4420", 00:31:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.975 "hdgst": false, 00:31:03.975 "ddgst": false 00:31:03.975 }, 00:31:03.975 "method": "bdev_nvme_attach_controller" 00:31:03.975 }' 00:31:03.975 [2024-11-20 12:45:09.627975] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:31:03.975 [2024-11-20 12:45:09.628025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368278 ] 00:31:03.975 [2024-11-20 12:45:09.705631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.234 [2024-11-20 12:45:09.744770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.494 Running I/O for 1 seconds... 
00:31:05.430 1984.00 IOPS, 124.00 MiB/s 00:31:05.430 Latency(us) 00:31:05.430 [2024-11-20T11:45:11.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.430 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:05.430 Verification LBA range: start 0x0 length 0x400 00:31:05.430 Nvme0n1 : 1.01 2034.15 127.13 0.00 0.00 30972.70 7552.24 26963.38 00:31:05.430 [2024-11-20T11:45:11.196Z] =================================================================================================================== 00:31:05.430 [2024-11-20T11:45:11.196Z] Total : 2034.15 127.13 0.00 0.00 30972.70 7552.24 26963.38 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:05.687 
12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.687 rmmod nvme_tcp 00:31:05.687 rmmod nvme_fabrics 00:31:05.687 rmmod nvme_keyring 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 367765 ']' 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 367765 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 367765 ']' 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 367765 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 367765 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.687 12:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 367765' 00:31:05.687 killing process with pid 367765 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 367765 00:31:05.687 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 367765 00:31:05.946 [2024-11-20 12:45:11.502958] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.946 12:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.853 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.853 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:07.853 00:31:07.853 real 0m12.479s 00:31:07.853 user 0m18.388s 00:31:07.853 sys 0m6.441s 00:31:07.853 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.853 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:07.853 ************************************ 00:31:07.853 END TEST nvmf_host_management 00:31:07.853 ************************************ 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.112 ************************************ 00:31:08.112 START TEST nvmf_lvol 00:31:08.112 ************************************ 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:08.112 * Looking for test storage... 
00:31:08.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.112 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.113 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.114 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
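The `paths/export.sh` lines above prepend `/opt/golangci`, `/opt/protoc`, and `/opt/go` on every nested `source`, so the exported PATH accumulates the same triplet many times over. One way to keep such a PATH idempotent — `dedupe_path` is a hypothetical helper for illustration, not part of SPDK:

```shell
#!/usr/bin/env bash
# Drop repeated entries from a colon-separated PATH, keeping first occurrences.
dedupe_path() {
    local entry out=
    local IFS=:
    for entry in $1; do                     # split on ':' via IFS
        case ":$out:" in
            *":$entry:"*) ;;                # already present: skip
            *) out=${out:+$out:}$entry ;;   # append, adding ':' only when needed
        esac
    done
    printf '%s\n' "$out"
}

PATH=$(dedupe_path "$PATH")
```

The duplicates are harmless for lookup (the shell stops at the first hit) but they make traces like the one above hard to read.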
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.373 
12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.373 12:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.942 12:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.942 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.943 12:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:14.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:14.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.943 12:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:14.943 Found net devices under 0000:86:00.0: cvl_0_0 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.943 12:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:14.943 Found net devices under 0000:86:00.1: cvl_0_1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
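The loop traced above (`nvmf/common.sh@410`–`@429`) resolves each whitelisted NIC PCI address to its bound interface name through sysfs — `/sys/bus/pci/devices/$pci/net/*` — which is how `0000:86:00.0` and `0000:86:00.1` become `cvl_0_0` and `cvl_0_1`. A small sketch of that lookup; `find_net_devs` is a hypothetical helper, not SPDK's `gather_supported_nvmf_pci_devs`:

```shell
#!/usr/bin/env bash
# Map a PCI address to the net interface(s) the kernel exposes for it.
# usage: find_net_devs <sysfs-devices-dir> <pci-addr>
find_net_devs() {
    local d
    for d in "$1/$2/net/"*; do
        [ -e "$d" ] && printf '%s\n' "${d##*/}"   # keep just the interface name
    done
    return 0
}

# On the machine in this run, these would print cvl_0_0 and cvl_0_1:
find_net_devs /sys/bus/pci/devices 0000:86:00.0
find_net_devs /sys/bus/pci/devices 0000:86:00.1
```

The `${pci_net_devs[@]##*/}` expansion in the trace is the same basename strip done array-wide.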
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:31:14.943 00:31:14.943 --- 10.0.0.2 ping statistics --- 00:31:14.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.943 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:14.943 00:31:14.943 --- 10.0.0.1 ping statistics --- 00:31:14.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.943 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=372036 
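Collected in one place, the `nvmf_tcp_init` plumbing traced above: the target NIC is moved into a fresh network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and a ping in each direction verifies the link. The interface names and addresses come from this run; the sequence requires root and the two `cvl_0_*` interfaces to exist, so this is a readback of the trace, not a portable script:

```shell
#!/usr/bin/env bash
set -e
ip netns add cvl_0_0_ns_spdk                           # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                     # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> host
```

With both pings answering (0.496 ms and 0.207 ms above), the target app is then launched under `ip netns exec cvl_0_0_ns_spdk` so it listens inside the namespace.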
00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:14.943 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 372036 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 372036 ']' 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.944 12:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:14.944 [2024-11-20 12:45:19.847026] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.944 [2024-11-20 12:45:19.847997] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:31:14.944 [2024-11-20 12:45:19.848035] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.944 [2024-11-20 12:45:19.924724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:14.944 [2024-11-20 12:45:19.966471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.944 [2024-11-20 12:45:19.966507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.944 [2024-11-20 12:45:19.966514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.944 [2024-11-20 12:45:19.966520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.944 [2024-11-20 12:45:19.966525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.944 [2024-11-20 12:45:19.967784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.944 [2024-11-20 12:45:19.967889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.944 [2024-11-20 12:45:19.967891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.944 [2024-11-20 12:45:20.037709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.944 [2024-11-20 12:45:20.038486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:14.944 [2024-11-20 12:45:20.038520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:14.944 [2024-11-20 12:45:20.038711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:14.944 [2024-11-20 12:45:20.268696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:14.944 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:15.203 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:15.203 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:15.203 12:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:15.462 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9a5d923f-5cd3-44a6-ace7-7eb93a9582c8 00:31:15.462 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a5d923f-5cd3-44a6-ace7-7eb93a9582c8 lvol 20 00:31:15.721 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=28bb17f2-06d0-496b-9d3d-23fb0d70ceeb 00:31:15.721 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:15.979 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 28bb17f2-06d0-496b-9d3d-23fb0d70ceeb 00:31:15.979 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:16.238 [2024-11-20 12:45:21.864546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.238 12:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:16.497 
12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=372374 00:31:16.497 12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:16.497 12:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:17.433 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 28bb17f2-06d0-496b-9d3d-23fb0d70ceeb MY_SNAPSHOT 00:31:17.692 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dd54cedc-6985-4ac3-ad53-bd042407d204 00:31:17.692 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 28bb17f2-06d0-496b-9d3d-23fb0d70ceeb 30 00:31:17.951 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dd54cedc-6985-4ac3-ad53-bd042407d204 MY_CLONE 00:31:18.210 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aa786194-ab30-42b4-8678-7d3f666b55e5 00:31:18.210 12:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aa786194-ab30-42b4-8678-7d3f666b55e5 00:31:18.779 12:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 372374 00:31:26.900 Initializing NVMe Controllers 00:31:26.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:26.900 
Controller IO queue size 128, less than required. 00:31:26.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:26.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:26.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:26.900 Initialization complete. Launching workers. 00:31:26.900 ======================================================== 00:31:26.900 Latency(us) 00:31:26.900 Device Information : IOPS MiB/s Average min max 00:31:26.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12519.00 48.90 10228.70 1543.86 51435.85 00:31:26.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12726.60 49.71 10058.23 4348.20 106339.41 00:31:26.900 ======================================================== 00:31:26.900 Total : 25245.60 98.62 10142.76 1543.86 106339.41 00:31:26.900 00:31:26.900 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.159 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28bb17f2-06d0-496b-9d3d-23fb0d70ceeb 00:31:27.419 12:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a5d923f-5cd3-44a6-ace7-7eb93a9582c8 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 
-- # nvmftestfini 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.419 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.419 rmmod nvme_tcp 00:31:27.419 rmmod nvme_fabrics 00:31:27.419 rmmod nvme_keyring 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 372036 ']' 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 372036 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 372036 ']' 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 372036 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 372036 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 372036' 00:31:27.678 killing process with pid 372036 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 372036 00:31:27.678 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 372036 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.937 12:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.937 12:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.844 00:31:29.844 real 0m21.846s 00:31:29.844 user 0m55.799s 00:31:29.844 sys 0m9.765s 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:29.844 ************************************ 00:31:29.844 END TEST nvmf_lvol 00:31:29.844 ************************************ 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:29.844 ************************************ 00:31:29.844 START TEST nvmf_lvs_grow 00:31:29.844 ************************************ 00:31:29.844 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:30.104 * Looking for test storage... 
00:31:30.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.104 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.104 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.105 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.105 --rc genhtml_branch_coverage=1 00:31:30.105 --rc genhtml_function_coverage=1 00:31:30.105 --rc genhtml_legend=1 00:31:30.105 --rc geninfo_all_blocks=1 00:31:30.105 --rc geninfo_unexecuted_blocks=1 00:31:30.105 00:31:30.105 ' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.105 --rc genhtml_branch_coverage=1 00:31:30.105 --rc genhtml_function_coverage=1 00:31:30.105 --rc genhtml_legend=1 00:31:30.105 --rc geninfo_all_blocks=1 00:31:30.105 --rc geninfo_unexecuted_blocks=1 00:31:30.105 00:31:30.105 ' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.105 --rc genhtml_branch_coverage=1 00:31:30.105 --rc genhtml_function_coverage=1 00:31:30.105 --rc genhtml_legend=1 00:31:30.105 --rc geninfo_all_blocks=1 00:31:30.105 --rc geninfo_unexecuted_blocks=1 00:31:30.105 00:31:30.105 ' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.105 --rc genhtml_branch_coverage=1 00:31:30.105 --rc genhtml_function_coverage=1 00:31:30.105 --rc genhtml_legend=1 00:31:30.105 --rc geninfo_all_blocks=1 00:31:30.105 --rc 
geninfo_unexecuted_blocks=1 00:31:30.105 00:31:30.105 ' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:30.105 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.105 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.105 12:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.105 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:30.106 12:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.676 
12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.676 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.677 12:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.677 12:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:36.677 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:36.677 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:36.677 Found net devices under 0000:86:00.0: cvl_0_0 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.677 12:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:36.677 Found net devices under 0000:86:00.1: cvl_0_1 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.677 
12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:31:36.677 00:31:36.677 --- 10.0.0.2 ping statistics --- 00:31:36.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.677 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:31:36.677 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:31:36.677 00:31:36.677 --- 10.0.0.1 ping statistics --- 00:31:36.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.678 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.678 12:45:41 
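The trace above shows `nvmf_tcp_init` building a point-to-point test topology out of the two E810 ports: one port (`cvl_0_0`) is moved into a network namespace and addressed as the target (10.0.0.2), its peer (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and pings in both directions confirm the link. A minimal sketch of the same setup, using the interface names and addresses from this run (they will differ on other machines; requires root):

```shell
# Sketch of the namespace topology nvmf_tcp_init builds in this log.
# Interface names (cvl_0_0 / cvl_0_1) and 10.0.0.0/24 addresses are
# taken from the trace above; adjust for your own NICs.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify connectivity in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target runs inside the namespace, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).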
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=377662 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 377662 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 377662 ']' 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.678 [2024-11-20 12:45:41.748083] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.678 [2024-11-20 12:45:41.749045] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:31:36.678 [2024-11-20 12:45:41.749086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.678 [2024-11-20 12:45:41.829997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.678 [2024-11-20 12:45:41.869900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.678 [2024-11-20 12:45:41.869935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.678 [2024-11-20 12:45:41.869942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.678 [2024-11-20 12:45:41.869948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.678 [2024-11-20 12:45:41.869953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.678 [2024-11-20 12:45:41.870498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.678 [2024-11-20 12:45:41.936512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.678 [2024-11-20 12:45:41.936718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.678 12:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:36.678 [2024-11-20 12:45:42.175162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:36.678 ************************************ 00:31:36.678 START TEST lvs_grow_clean 00:31:36.678 ************************************ 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:36.678 12:45:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.678 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:36.937 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:36.937 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:36.937 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:37.196 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:37.196 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:37.196 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:37.196 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:37.196 12:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d lvol 150 00:31:37.455 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=00411a04-1777-4460-9fec-194e894ab2ba 00:31:37.455 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:37.455 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:37.715 [2024-11-20 12:45:43.242887] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:37.715 [2024-11-20 12:45:43.243012] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:37.715 true 00:31:37.715 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:37.715 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:37.715 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:37.715 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:37.974 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00411a04-1777-4460-9fec-194e894ab2ba 00:31:38.234 12:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.525 [2024-11-20 12:45:44.031434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.525 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=378153 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 378153 /var/tmp/bdevperf.sock 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 378153 ']' 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
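Collected from the trace above, this is the JSON-RPC sequence the test uses to export the lvol over NVMe/TCP and then attach to it from bdevperf. Paths are shortened and the lvol UUID is the one printed in this particular run; both are per-environment:

```shell
# RPC sequence reconstructed from this log (rpc.py path shortened).
# Requires a running nvmf_tgt reachable on the default /var/tmp/spdk.sock.
RPC=scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
    00411a04-1777-4460-9fec-194e894ab2ba          # lvol UUID from this run
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# bdevperf runs as a separate process with its own RPC socket and
# connects to the target as an NVMe/TCP initiator:
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0
```

The attached controller surfaces as the `Nvme0n1` bdev whose JSON description appears below.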
00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.526 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:38.526 [2024-11-20 12:45:44.288018] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:31:38.526 [2024-11-20 12:45:44.288067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid378153 ] 00:31:38.785 [2024-11-20 12:45:44.362342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.785 [2024-11-20 12:45:44.404652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.785 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.785 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:38.785 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:39.353 Nvme0n1 00:31:39.353 12:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:39.353 [ 00:31:39.353 { 00:31:39.353 "name": "Nvme0n1", 00:31:39.353 "aliases": [ 00:31:39.353 "00411a04-1777-4460-9fec-194e894ab2ba" 00:31:39.353 ], 00:31:39.353 "product_name": "NVMe disk", 00:31:39.353 
"block_size": 4096, 00:31:39.353 "num_blocks": 38912, 00:31:39.353 "uuid": "00411a04-1777-4460-9fec-194e894ab2ba", 00:31:39.353 "numa_id": 1, 00:31:39.353 "assigned_rate_limits": { 00:31:39.353 "rw_ios_per_sec": 0, 00:31:39.353 "rw_mbytes_per_sec": 0, 00:31:39.353 "r_mbytes_per_sec": 0, 00:31:39.353 "w_mbytes_per_sec": 0 00:31:39.353 }, 00:31:39.353 "claimed": false, 00:31:39.353 "zoned": false, 00:31:39.353 "supported_io_types": { 00:31:39.353 "read": true, 00:31:39.353 "write": true, 00:31:39.353 "unmap": true, 00:31:39.353 "flush": true, 00:31:39.354 "reset": true, 00:31:39.354 "nvme_admin": true, 00:31:39.354 "nvme_io": true, 00:31:39.354 "nvme_io_md": false, 00:31:39.354 "write_zeroes": true, 00:31:39.354 "zcopy": false, 00:31:39.354 "get_zone_info": false, 00:31:39.354 "zone_management": false, 00:31:39.354 "zone_append": false, 00:31:39.354 "compare": true, 00:31:39.354 "compare_and_write": true, 00:31:39.354 "abort": true, 00:31:39.354 "seek_hole": false, 00:31:39.354 "seek_data": false, 00:31:39.354 "copy": true, 00:31:39.354 "nvme_iov_md": false 00:31:39.354 }, 00:31:39.354 "memory_domains": [ 00:31:39.354 { 00:31:39.354 "dma_device_id": "system", 00:31:39.354 "dma_device_type": 1 00:31:39.354 } 00:31:39.354 ], 00:31:39.354 "driver_specific": { 00:31:39.354 "nvme": [ 00:31:39.354 { 00:31:39.354 "trid": { 00:31:39.354 "trtype": "TCP", 00:31:39.354 "adrfam": "IPv4", 00:31:39.354 "traddr": "10.0.0.2", 00:31:39.354 "trsvcid": "4420", 00:31:39.354 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:39.354 }, 00:31:39.354 "ctrlr_data": { 00:31:39.354 "cntlid": 1, 00:31:39.354 "vendor_id": "0x8086", 00:31:39.354 "model_number": "SPDK bdev Controller", 00:31:39.354 "serial_number": "SPDK0", 00:31:39.354 "firmware_revision": "25.01", 00:31:39.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.354 "oacs": { 00:31:39.354 "security": 0, 00:31:39.354 "format": 0, 00:31:39.354 "firmware": 0, 00:31:39.354 "ns_manage": 0 00:31:39.354 }, 00:31:39.354 "multi_ctrlr": true, 
00:31:39.354 "ana_reporting": false 00:31:39.354 }, 00:31:39.354 "vs": { 00:31:39.354 "nvme_version": "1.3" 00:31:39.354 }, 00:31:39.354 "ns_data": { 00:31:39.354 "id": 1, 00:31:39.354 "can_share": true 00:31:39.354 } 00:31:39.354 } 00:31:39.354 ], 00:31:39.354 "mp_policy": "active_passive" 00:31:39.354 } 00:31:39.354 } 00:31:39.354 ] 00:31:39.354 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=378171 00:31:39.354 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:39.354 12:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:39.613 Running I/O for 10 seconds... 00:31:40.549 Latency(us) 00:31:40.549 [2024-11-20T11:45:46.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.549 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:31:40.549 [2024-11-20T11:45:46.315Z] =================================================================================================================== 00:31:40.549 [2024-11-20T11:45:46.315Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:31:40.549 00:31:41.485 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:41.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.485 Nvme0n1 : 2.00 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:31:41.485 [2024-11-20T11:45:47.251Z] 
=================================================================================================================== 00:31:41.485 [2024-11-20T11:45:47.251Z] Total : 23177.50 90.54 0.00 0.00 0.00 0.00 0.00 00:31:41.485 00:31:41.485 true 00:31:41.744 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:41.744 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:41.744 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:41.744 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:41.744 12:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 378171 00:31:42.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.682 Nvme0n1 : 3.00 23262.33 90.87 0.00 0.00 0.00 0.00 0.00 00:31:42.682 [2024-11-20T11:45:48.448Z] =================================================================================================================== 00:31:42.682 [2024-11-20T11:45:48.448Z] Total : 23262.33 90.87 0.00 0.00 0.00 0.00 0.00 00:31:42.682 00:31:43.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.617 Nvme0n1 : 4.00 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:31:43.617 [2024-11-20T11:45:49.383Z] =================================================================================================================== 00:31:43.617 [2024-11-20T11:45:49.383Z] Total : 23384.00 91.34 0.00 0.00 0.00 0.00 0.00 00:31:43.617 00:31:44.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:31:44.553 Nvme0n1 : 5.00 23406.20 91.43 0.00 0.00 0.00 0.00 0.00 00:31:44.553 [2024-11-20T11:45:50.319Z] =================================================================================================================== 00:31:44.553 [2024-11-20T11:45:50.319Z] Total : 23406.20 91.43 0.00 0.00 0.00 0.00 0.00 00:31:44.553 00:31:45.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.489 Nvme0n1 : 6.00 23399.83 91.41 0.00 0.00 0.00 0.00 0.00 00:31:45.489 [2024-11-20T11:45:51.255Z] =================================================================================================================== 00:31:45.489 [2024-11-20T11:45:51.255Z] Total : 23399.83 91.41 0.00 0.00 0.00 0.00 0.00 00:31:45.489 00:31:46.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.426 Nvme0n1 : 7.00 23440.71 91.57 0.00 0.00 0.00 0.00 0.00 00:31:46.426 [2024-11-20T11:45:52.192Z] =================================================================================================================== 00:31:46.426 [2024-11-20T11:45:52.192Z] Total : 23440.71 91.57 0.00 0.00 0.00 0.00 0.00 00:31:46.426 00:31:47.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.804 Nvme0n1 : 8.00 23479.25 91.72 0.00 0.00 0.00 0.00 0.00 00:31:47.804 [2024-11-20T11:45:53.570Z] =================================================================================================================== 00:31:47.804 [2024-11-20T11:45:53.570Z] Total : 23479.25 91.72 0.00 0.00 0.00 0.00 0.00 00:31:47.804 00:31:48.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.741 Nvme0n1 : 9.00 23509.22 91.83 0.00 0.00 0.00 0.00 0.00 00:31:48.741 [2024-11-20T11:45:54.507Z] =================================================================================================================== 00:31:48.741 [2024-11-20T11:45:54.507Z] Total : 23509.22 91.83 0.00 0.00 0.00 0.00 0.00 00:31:48.741 
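The sizes reported throughout this run are internally consistent and can be checked arithmetically: the 200 MiB AIO file at a 4096-byte block size is 51200 blocks (102400 after `truncate -s 400M` and `bdev_aio_rescan`); with 4 MiB clusters the lvstore reports 49 data clusters before the grow and 99 after (one cluster short of the raw total in each case, presumably consumed by lvstore metadata — an inference from the counts, not something the log states); and the 150 MiB lvol rounds up to 38 whole clusters, i.e. the 38912-block namespace shown by `bdev_get_bdevs`:

```shell
# Cross-check the geometry reported in this log. The "minus one data
# cluster" is inferred from the reported counts (49 of 50, 99 of 100),
# likely lvstore metadata; the log itself does not say so.
set -e
MIB=$((1024 * 1024))
BLOCK=4096                 # bdev_aio_create block size used in this run
CLUSTER=$((4 * MIB))       # bdev_lvol_create_lvstore --cluster-sz 4194304

# AIO bdev block counts before and after "truncate -s 400M"
[ $((200 * MIB / BLOCK)) -eq 51200 ]     # "old block count 51200"
[ $((400 * MIB / BLOCK)) -eq 102400 ]    # "new block count 102400"

# Data clusters reported by bdev_lvol_get_lvstores
[ $((200 * MIB / CLUSTER - 1)) -eq 49 ]
[ $((400 * MIB / CLUSTER - 1)) -eq 99 ]

# 150 MiB lvol rounded up to whole clusters: 38 * 4 MiB = 152 MiB,
# i.e. the 38912-block namespace in the bdev_get_bdevs output
LVOL_CLUSTERS=$(( (150 * MIB + CLUSTER - 1) / CLUSTER ))
[ $((LVOL_CLUSTERS * CLUSTER / BLOCK)) -eq 38912 ]
echo "geometry checks passed"
```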
00:31:49.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.691 Nvme0n1 : 10.00 23545.90 91.98 0.00 0.00 0.00 0.00 0.00 00:31:49.691 [2024-11-20T11:45:55.457Z] =================================================================================================================== 00:31:49.691 [2024-11-20T11:45:55.457Z] Total : 23545.90 91.98 0.00 0.00 0.00 0.00 0.00 00:31:49.691 00:31:49.691 00:31:49.691 Latency(us) 00:31:49.691 [2024-11-20T11:45:55.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.691 Nvme0n1 : 10.00 23543.31 91.97 0.00 0.00 5433.40 3276.80 26089.57 00:31:49.691 [2024-11-20T11:45:55.457Z] =================================================================================================================== 00:31:49.691 [2024-11-20T11:45:55.457Z] Total : 23543.31 91.97 0.00 0.00 5433.40 3276.80 26089.57 00:31:49.691 { 00:31:49.691 "results": [ 00:31:49.691 { 00:31:49.691 "job": "Nvme0n1", 00:31:49.691 "core_mask": "0x2", 00:31:49.691 "workload": "randwrite", 00:31:49.691 "status": "finished", 00:31:49.691 "queue_depth": 128, 00:31:49.691 "io_size": 4096, 00:31:49.692 "runtime": 10.003862, 00:31:49.692 "iops": 23543.307574614682, 00:31:49.692 "mibps": 91.9660452133386, 00:31:49.692 "io_failed": 0, 00:31:49.692 "io_timeout": 0, 00:31:49.692 "avg_latency_us": 5433.40072284616, 00:31:49.692 "min_latency_us": 3276.8, 00:31:49.692 "max_latency_us": 26089.569523809525 00:31:49.692 } 00:31:49.692 ], 00:31:49.692 "core_count": 1 00:31:49.692 } 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 378153 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 378153 ']' 00:31:49.692 12:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 378153 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378153 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378153' 00:31:49.692 killing process with pid 378153 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 378153 00:31:49.692 Received shutdown signal, test time was about 10.000000 seconds 00:31:49.692 00:31:49.692 Latency(us) 00:31:49.692 [2024-11-20T11:45:55.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.692 [2024-11-20T11:45:55.458Z] =================================================================================================================== 00:31:49.692 [2024-11-20T11:45:55.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 378153 00:31:49.692 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:49.985 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.269 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:50.269 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:50.269 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:50.269 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:50.269 12:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:50.535 [2024-11-20 12:45:56.154961] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:50.535 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:50.535 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:50.535 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:50.536 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:50.794 request: 00:31:50.794 { 00:31:50.794 "uuid": "36a300d2-3fe7-4df3-bd80-d1610fef2f0d", 00:31:50.794 "method": 
"bdev_lvol_get_lvstores", 00:31:50.794 "req_id": 1 00:31:50.794 } 00:31:50.794 Got JSON-RPC error response 00:31:50.794 response: 00:31:50.794 { 00:31:50.794 "code": -19, 00:31:50.794 "message": "No such device" 00:31:50.794 } 00:31:50.794 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:50.794 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:50.794 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:50.794 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:50.794 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:51.053 aio_bdev 00:31:51.053 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00411a04-1777-4460-9fec-194e894ab2ba 00:31:51.053 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=00411a04-1777-4460-9fec-194e894ab2ba 00:31:51.053 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.054 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:51.054 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.054 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.054 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:51.054 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00411a04-1777-4460-9fec-194e894ab2ba -t 2000 00:31:51.312 [ 00:31:51.312 { 00:31:51.312 "name": "00411a04-1777-4460-9fec-194e894ab2ba", 00:31:51.312 "aliases": [ 00:31:51.312 "lvs/lvol" 00:31:51.312 ], 00:31:51.312 "product_name": "Logical Volume", 00:31:51.312 "block_size": 4096, 00:31:51.312 "num_blocks": 38912, 00:31:51.312 "uuid": "00411a04-1777-4460-9fec-194e894ab2ba", 00:31:51.312 "assigned_rate_limits": { 00:31:51.312 "rw_ios_per_sec": 0, 00:31:51.312 "rw_mbytes_per_sec": 0, 00:31:51.313 "r_mbytes_per_sec": 0, 00:31:51.313 "w_mbytes_per_sec": 0 00:31:51.313 }, 00:31:51.313 "claimed": false, 00:31:51.313 "zoned": false, 00:31:51.313 "supported_io_types": { 00:31:51.313 "read": true, 00:31:51.313 "write": true, 00:31:51.313 "unmap": true, 00:31:51.313 "flush": false, 00:31:51.313 "reset": true, 00:31:51.313 "nvme_admin": false, 00:31:51.313 "nvme_io": false, 00:31:51.313 "nvme_io_md": false, 00:31:51.313 "write_zeroes": true, 00:31:51.313 "zcopy": false, 00:31:51.313 "get_zone_info": false, 00:31:51.313 "zone_management": false, 00:31:51.313 "zone_append": false, 00:31:51.313 "compare": false, 00:31:51.313 "compare_and_write": false, 00:31:51.313 "abort": false, 00:31:51.313 "seek_hole": true, 00:31:51.313 "seek_data": true, 00:31:51.313 "copy": false, 00:31:51.313 "nvme_iov_md": false 00:31:51.313 }, 00:31:51.313 "driver_specific": { 00:31:51.313 "lvol": { 00:31:51.313 "lvol_store_uuid": "36a300d2-3fe7-4df3-bd80-d1610fef2f0d", 00:31:51.313 "base_bdev": "aio_bdev", 00:31:51.313 
"thin_provision": false, 00:31:51.313 "num_allocated_clusters": 38, 00:31:51.313 "snapshot": false, 00:31:51.313 "clone": false, 00:31:51.313 "esnap_clone": false 00:31:51.313 } 00:31:51.313 } 00:31:51.313 } 00:31:51.313 ] 00:31:51.313 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:51.313 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:51.313 12:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:51.571 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:51.572 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 00:31:51.572 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:51.832 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:51.832 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00411a04-1777-4460-9fec-194e894ab2ba 00:31:51.832 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36a300d2-3fe7-4df3-bd80-d1610fef2f0d 
00:31:52.091 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:52.351 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:52.351 00:31:52.351 real 0m15.733s 00:31:52.351 user 0m15.236s 00:31:52.351 sys 0m1.474s 00:31:52.351 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.351 12:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:52.351 ************************************ 00:31:52.351 END TEST lvs_grow_clean 00:31:52.351 ************************************ 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:52.351 ************************************ 00:31:52.351 START TEST lvs_grow_dirty 00:31:52.351 ************************************ 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:52.351 12:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:52.351 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:52.610 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:52.610 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:52.869 12:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=79ebddf9-a56e-4130-a74f-574893261da3 00:31:52.869 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:31:52.869 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 79ebddf9-a56e-4130-a74f-574893261da3 lvol 150 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=af151a33-0889-4ef1-b2d5-769338d9532b 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:53.129 12:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:53.388 [2024-11-20 12:45:59.034897] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:53.388 [2024-11-20 
12:45:59.035029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:53.388 true 00:31:53.388 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:53.388 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:31:53.646 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:53.646 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:53.906 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 af151a33-0889-4ef1-b2d5-769338d9532b 00:31:53.906 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.165 [2024-11-20 12:45:59.807364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.165 12:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=380746 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 380746 /var/tmp/bdevperf.sock 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 380746 ']' 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.424 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:54.424 [2024-11-20 12:46:00.059575] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:31:54.424 [2024-11-20 12:46:00.059626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380746 ] 00:31:54.424 [2024-11-20 12:46:00.133828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.424 [2024-11-20 12:46:00.175984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.683 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.683 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:54.683 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:54.941 Nvme0n1 00:31:54.941 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:55.200 [ 00:31:55.200 { 00:31:55.200 "name": "Nvme0n1", 00:31:55.200 "aliases": [ 00:31:55.200 "af151a33-0889-4ef1-b2d5-769338d9532b" 00:31:55.200 ], 00:31:55.200 "product_name": "NVMe disk", 00:31:55.200 "block_size": 4096, 00:31:55.200 "num_blocks": 38912, 00:31:55.200 "uuid": "af151a33-0889-4ef1-b2d5-769338d9532b", 00:31:55.200 "numa_id": 1, 00:31:55.200 "assigned_rate_limits": { 00:31:55.200 "rw_ios_per_sec": 0, 00:31:55.200 "rw_mbytes_per_sec": 0, 00:31:55.200 "r_mbytes_per_sec": 0, 00:31:55.200 "w_mbytes_per_sec": 0 00:31:55.200 }, 00:31:55.200 "claimed": false, 00:31:55.200 "zoned": false, 
00:31:55.200 "supported_io_types": { 00:31:55.200 "read": true, 00:31:55.200 "write": true, 00:31:55.200 "unmap": true, 00:31:55.200 "flush": true, 00:31:55.200 "reset": true, 00:31:55.200 "nvme_admin": true, 00:31:55.200 "nvme_io": true, 00:31:55.200 "nvme_io_md": false, 00:31:55.200 "write_zeroes": true, 00:31:55.200 "zcopy": false, 00:31:55.200 "get_zone_info": false, 00:31:55.200 "zone_management": false, 00:31:55.200 "zone_append": false, 00:31:55.200 "compare": true, 00:31:55.200 "compare_and_write": true, 00:31:55.200 "abort": true, 00:31:55.200 "seek_hole": false, 00:31:55.200 "seek_data": false, 00:31:55.200 "copy": true, 00:31:55.200 "nvme_iov_md": false 00:31:55.200 }, 00:31:55.200 "memory_domains": [ 00:31:55.200 { 00:31:55.200 "dma_device_id": "system", 00:31:55.200 "dma_device_type": 1 00:31:55.200 } 00:31:55.200 ], 00:31:55.200 "driver_specific": { 00:31:55.200 "nvme": [ 00:31:55.200 { 00:31:55.200 "trid": { 00:31:55.200 "trtype": "TCP", 00:31:55.200 "adrfam": "IPv4", 00:31:55.200 "traddr": "10.0.0.2", 00:31:55.200 "trsvcid": "4420", 00:31:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:55.200 }, 00:31:55.200 "ctrlr_data": { 00:31:55.200 "cntlid": 1, 00:31:55.200 "vendor_id": "0x8086", 00:31:55.200 "model_number": "SPDK bdev Controller", 00:31:55.200 "serial_number": "SPDK0", 00:31:55.200 "firmware_revision": "25.01", 00:31:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.200 "oacs": { 00:31:55.200 "security": 0, 00:31:55.200 "format": 0, 00:31:55.200 "firmware": 0, 00:31:55.200 "ns_manage": 0 00:31:55.200 }, 00:31:55.200 "multi_ctrlr": true, 00:31:55.200 "ana_reporting": false 00:31:55.200 }, 00:31:55.200 "vs": { 00:31:55.200 "nvme_version": "1.3" 00:31:55.200 }, 00:31:55.200 "ns_data": { 00:31:55.200 "id": 1, 00:31:55.200 "can_share": true 00:31:55.200 } 00:31:55.200 } 00:31:55.200 ], 00:31:55.200 "mp_policy": "active_passive" 00:31:55.200 } 00:31:55.200 } 00:31:55.200 ] 00:31:55.200 12:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=380760 00:31:55.200 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:55.200 12:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:55.200 Running I/O for 10 seconds... 00:31:56.577 Latency(us) 00:31:56.577 [2024-11-20T11:46:02.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.577 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:56.577 [2024-11-20T11:46:02.343Z] =================================================================================================================== 00:31:56.577 [2024-11-20T11:46:02.343Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:56.577 00:31:57.145 12:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 79ebddf9-a56e-4130-a74f-574893261da3 00:31:57.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.404 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:57.404 [2024-11-20T11:46:03.170Z] =================================================================================================================== 00:31:57.404 [2024-11-20T11:46:03.170Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:57.404 00:31:57.404 true 00:31:57.404 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:31:57.404 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:57.663 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:57.663 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:57.663 12:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 380760 00:31:58.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.231 Nvme0n1 : 3.00 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:58.231 [2024-11-20T11:46:03.997Z] =================================================================================================================== 00:31:58.231 [2024-11-20T11:46:03.997Z] Total : 23198.67 90.62 0.00 0.00 0.00 0.00 0.00 00:31:58.231 00:31:59.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.607 Nvme0n1 : 4.00 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:59.607 [2024-11-20T11:46:05.373Z] =================================================================================================================== 00:31:59.607 [2024-11-20T11:46:05.373Z] Total : 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:31:59.607 00:32:00.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.542 Nvme0n1 : 5.00 23342.60 91.18 0.00 0.00 0.00 0.00 0.00 00:32:00.542 [2024-11-20T11:46:06.308Z] =================================================================================================================== 00:32:00.542 [2024-11-20T11:46:06.308Z] Total : 23342.60 91.18 0.00 0.00 0.00 0.00 0.00 00:32:00.542 00:32:01.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:01.477 Nvme0n1 : 6.00 23389.17 91.36 0.00 0.00 0.00 0.00 0.00 00:32:01.477 [2024-11-20T11:46:07.243Z] =================================================================================================================== 00:32:01.477 [2024-11-20T11:46:07.243Z] Total : 23389.17 91.36 0.00 0.00 0.00 0.00 0.00 00:32:01.477 00:32:02.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.413 Nvme0n1 : 7.00 23424.86 91.50 0.00 0.00 0.00 0.00 0.00 00:32:02.414 [2024-11-20T11:46:08.180Z] =================================================================================================================== 00:32:02.414 [2024-11-20T11:46:08.180Z] Total : 23424.86 91.50 0.00 0.00 0.00 0.00 0.00 00:32:02.414 00:32:03.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.349 Nvme0n1 : 8.00 23465.38 91.66 0.00 0.00 0.00 0.00 0.00 00:32:03.349 [2024-11-20T11:46:09.115Z] =================================================================================================================== 00:32:03.349 [2024-11-20T11:46:09.115Z] Total : 23465.38 91.66 0.00 0.00 0.00 0.00 0.00 00:32:03.349 00:32:04.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.295 Nvme0n1 : 9.00 23482.78 91.73 0.00 0.00 0.00 0.00 0.00 00:32:04.295 [2024-11-20T11:46:10.061Z] =================================================================================================================== 00:32:04.295 [2024-11-20T11:46:10.061Z] Total : 23482.78 91.73 0.00 0.00 0.00 0.00 0.00 00:32:04.295 00:32:05.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.231 Nvme0n1 : 10.00 23496.70 91.78 0.00 0.00 0.00 0.00 0.00 00:32:05.231 [2024-11-20T11:46:10.997Z] =================================================================================================================== 00:32:05.231 [2024-11-20T11:46:10.997Z] Total : 23496.70 91.78 0.00 0.00 0.00 0.00 0.00 00:32:05.231 00:32:05.231 
00:32:05.231 Latency(us) 00:32:05.231 [2024-11-20T11:46:10.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.231 Nvme0n1 : 10.00 23489.00 91.75 0.00 0.00 5445.77 3214.38 25090.93 00:32:05.231 [2024-11-20T11:46:10.997Z] =================================================================================================================== 00:32:05.231 [2024-11-20T11:46:10.997Z] Total : 23489.00 91.75 0.00 0.00 5445.77 3214.38 25090.93 00:32:05.231 { 00:32:05.231 "results": [ 00:32:05.231 { 00:32:05.231 "job": "Nvme0n1", 00:32:05.231 "core_mask": "0x2", 00:32:05.231 "workload": "randwrite", 00:32:05.231 "status": "finished", 00:32:05.231 "queue_depth": 128, 00:32:05.231 "io_size": 4096, 00:32:05.231 "runtime": 10.003321, 00:32:05.231 "iops": 23488.999303331362, 00:32:05.231 "mibps": 91.75390352863813, 00:32:05.231 "io_failed": 0, 00:32:05.231 "io_timeout": 0, 00:32:05.231 "avg_latency_us": 5445.7695679735925, 00:32:05.231 "min_latency_us": 3214.384761904762, 00:32:05.231 "max_latency_us": 25090.925714285713 00:32:05.231 } 00:32:05.231 ], 00:32:05.231 "core_count": 1 00:32:05.231 } 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 380746 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 380746 ']' 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 380746 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:05.491 12:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380746 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380746' 00:32:05.491 killing process with pid 380746 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 380746 00:32:05.491 Received shutdown signal, test time was about 10.000000 seconds 00:32:05.491 00:32:05.491 Latency(us) 00:32:05.491 [2024-11-20T11:46:11.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.491 [2024-11-20T11:46:11.257Z] =================================================================================================================== 00:32:05.491 [2024-11-20T11:46:11.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 380746 00:32:05.491 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:05.750 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:06.009 12:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:06.009 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 377662 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 377662 00:32:06.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 377662 Killed "${NVMF_APP[@]}" "$@" 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=382588 00:32:06.268 12:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 382588 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 382588 ']' 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.268 12:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.268 [2024-11-20 12:46:11.884652] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.268 [2024-11-20 12:46:11.885551] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:06.268 [2024-11-20 12:46:11.885583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.268 [2024-11-20 12:46:11.962760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.268 [2024-11-20 12:46:12.000969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.268 [2024-11-20 12:46:12.001008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.268 [2024-11-20 12:46:12.001015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.268 [2024-11-20 12:46:12.001024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.268 [2024-11-20 12:46:12.001029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.268 [2024-11-20 12:46:12.001571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.527 [2024-11-20 12:46:12.067708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.527 [2024-11-20 12:46:12.067916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.527 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:06.786 [2024-11-20 12:46:12.314959] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:06.786 [2024-11-20 12:46:12.315156] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:06.786 [2024-11-20 12:46:12.315251] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev af151a33-0889-4ef1-b2d5-769338d9532b 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=af151a33-0889-4ef1-b2d5-769338d9532b 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:06.786 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af151a33-0889-4ef1-b2d5-769338d9532b -t 2000 00:32:07.045 [ 00:32:07.045 { 00:32:07.045 "name": "af151a33-0889-4ef1-b2d5-769338d9532b", 00:32:07.045 "aliases": [ 00:32:07.045 "lvs/lvol" 00:32:07.045 ], 00:32:07.045 "product_name": "Logical Volume", 00:32:07.045 "block_size": 4096, 00:32:07.045 "num_blocks": 38912, 00:32:07.045 "uuid": "af151a33-0889-4ef1-b2d5-769338d9532b", 00:32:07.045 "assigned_rate_limits": { 00:32:07.045 "rw_ios_per_sec": 0, 00:32:07.045 "rw_mbytes_per_sec": 0, 00:32:07.045 "r_mbytes_per_sec": 0, 00:32:07.045 "w_mbytes_per_sec": 0 00:32:07.045 }, 00:32:07.045 "claimed": false, 00:32:07.045 "zoned": false, 00:32:07.045 "supported_io_types": { 00:32:07.045 "read": true, 00:32:07.045 "write": true, 00:32:07.045 "unmap": true, 00:32:07.045 "flush": false, 00:32:07.045 "reset": true, 00:32:07.045 "nvme_admin": false, 00:32:07.045 "nvme_io": false, 00:32:07.045 "nvme_io_md": false, 00:32:07.045 "write_zeroes": true, 
00:32:07.045 "zcopy": false, 00:32:07.045 "get_zone_info": false, 00:32:07.045 "zone_management": false, 00:32:07.045 "zone_append": false, 00:32:07.045 "compare": false, 00:32:07.045 "compare_and_write": false, 00:32:07.045 "abort": false, 00:32:07.045 "seek_hole": true, 00:32:07.045 "seek_data": true, 00:32:07.045 "copy": false, 00:32:07.045 "nvme_iov_md": false 00:32:07.045 }, 00:32:07.045 "driver_specific": { 00:32:07.045 "lvol": { 00:32:07.045 "lvol_store_uuid": "79ebddf9-a56e-4130-a74f-574893261da3", 00:32:07.045 "base_bdev": "aio_bdev", 00:32:07.045 "thin_provision": false, 00:32:07.045 "num_allocated_clusters": 38, 00:32:07.045 "snapshot": false, 00:32:07.045 "clone": false, 00:32:07.045 "esnap_clone": false 00:32:07.045 } 00:32:07.045 } 00:32:07.045 } 00:32:07.045 ] 00:32:07.045 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:07.045 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:07.045 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:07.304 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:07.304 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:07.304 12:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:07.563 [2024-11-20 12:46:13.270014] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:07.563 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:07.822 request: 00:32:07.822 { 00:32:07.822 "uuid": "79ebddf9-a56e-4130-a74f-574893261da3", 00:32:07.822 "method": "bdev_lvol_get_lvstores", 00:32:07.822 "req_id": 1 00:32:07.822 } 00:32:07.822 Got JSON-RPC error response 00:32:07.822 response: 00:32:07.822 { 00:32:07.822 "code": -19, 00:32:07.822 "message": "No such device" 00:32:07.822 } 00:32:07.822 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:07.822 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:07.822 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:07.822 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:07.822 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:08.081 aio_bdev 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev af151a33-0889-4ef1-b2d5-769338d9532b 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=af151a33-0889-4ef1-b2d5-769338d9532b 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:08.081 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:08.340 12:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af151a33-0889-4ef1-b2d5-769338d9532b -t 2000 00:32:08.340 [ 00:32:08.340 { 00:32:08.340 "name": "af151a33-0889-4ef1-b2d5-769338d9532b", 00:32:08.340 "aliases": [ 00:32:08.340 "lvs/lvol" 00:32:08.340 ], 00:32:08.340 "product_name": "Logical Volume", 00:32:08.340 "block_size": 4096, 00:32:08.340 "num_blocks": 38912, 00:32:08.340 "uuid": "af151a33-0889-4ef1-b2d5-769338d9532b", 00:32:08.340 "assigned_rate_limits": { 00:32:08.340 "rw_ios_per_sec": 0, 00:32:08.340 "rw_mbytes_per_sec": 0, 00:32:08.340 
"r_mbytes_per_sec": 0, 00:32:08.340 "w_mbytes_per_sec": 0 00:32:08.340 }, 00:32:08.340 "claimed": false, 00:32:08.340 "zoned": false, 00:32:08.340 "supported_io_types": { 00:32:08.340 "read": true, 00:32:08.340 "write": true, 00:32:08.340 "unmap": true, 00:32:08.340 "flush": false, 00:32:08.340 "reset": true, 00:32:08.340 "nvme_admin": false, 00:32:08.340 "nvme_io": false, 00:32:08.340 "nvme_io_md": false, 00:32:08.340 "write_zeroes": true, 00:32:08.340 "zcopy": false, 00:32:08.340 "get_zone_info": false, 00:32:08.340 "zone_management": false, 00:32:08.340 "zone_append": false, 00:32:08.340 "compare": false, 00:32:08.340 "compare_and_write": false, 00:32:08.340 "abort": false, 00:32:08.340 "seek_hole": true, 00:32:08.340 "seek_data": true, 00:32:08.340 "copy": false, 00:32:08.340 "nvme_iov_md": false 00:32:08.340 }, 00:32:08.340 "driver_specific": { 00:32:08.340 "lvol": { 00:32:08.340 "lvol_store_uuid": "79ebddf9-a56e-4130-a74f-574893261da3", 00:32:08.340 "base_bdev": "aio_bdev", 00:32:08.340 "thin_provision": false, 00:32:08.340 "num_allocated_clusters": 38, 00:32:08.340 "snapshot": false, 00:32:08.340 "clone": false, 00:32:08.340 "esnap_clone": false 00:32:08.340 } 00:32:08.340 } 00:32:08.340 } 00:32:08.340 ] 00:32:08.340 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:08.340 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:08.340 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:08.599 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:08.599 12:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:08.599 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:08.857 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:08.857 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete af151a33-0889-4ef1-b2d5-769338d9532b 00:32:09.116 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79ebddf9-a56e-4130-a74f-574893261da3 00:32:09.116 12:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:09.375 00:32:09.375 real 0m17.042s 00:32:09.375 user 0m34.493s 00:32:09.375 sys 0m3.800s 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:09.375 ************************************ 00:32:09.375 END TEST lvs_grow_dirty 00:32:09.375 ************************************ 
00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:09.375 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:09.375 nvmf_trace.0 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.634 12:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.634 rmmod nvme_tcp 00:32:09.634 rmmod nvme_fabrics 00:32:09.634 rmmod nvme_keyring 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 382588 ']' 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 382588 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 382588 ']' 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 382588 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382588 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:09.634 12:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382588' 00:32:09.634 killing process with pid 382588 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 382588 00:32:09.634 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 382588 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.894 12:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.799 12:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.799 00:32:11.799 real 0m41.958s 00:32:11.799 user 0m52.232s 00:32:11.799 sys 0m10.143s 00:32:11.799 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.799 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:11.799 ************************************ 00:32:11.799 END TEST nvmf_lvs_grow 00:32:11.799 ************************************ 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:12.058 ************************************ 00:32:12.058 START TEST nvmf_bdev_io_wait 00:32:12.058 ************************************ 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:12.058 * Looking for test storage... 
00:32:12.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:12.058 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:12.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.059 --rc genhtml_branch_coverage=1 00:32:12.059 --rc genhtml_function_coverage=1 00:32:12.059 --rc genhtml_legend=1 00:32:12.059 --rc geninfo_all_blocks=1 00:32:12.059 --rc geninfo_unexecuted_blocks=1 00:32:12.059 00:32:12.059 ' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:12.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.059 --rc genhtml_branch_coverage=1 00:32:12.059 --rc genhtml_function_coverage=1 00:32:12.059 --rc genhtml_legend=1 00:32:12.059 --rc geninfo_all_blocks=1 00:32:12.059 --rc geninfo_unexecuted_blocks=1 00:32:12.059 00:32:12.059 ' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:12.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.059 --rc genhtml_branch_coverage=1 00:32:12.059 --rc genhtml_function_coverage=1 00:32:12.059 --rc genhtml_legend=1 00:32:12.059 --rc geninfo_all_blocks=1 00:32:12.059 --rc geninfo_unexecuted_blocks=1 00:32:12.059 00:32:12.059 ' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:12.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.059 --rc genhtml_branch_coverage=1 00:32:12.059 --rc genhtml_function_coverage=1 
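The `lt 1.15 2` walk above (cmp_versions splitting on `IFS=.-`, then comparing field by field) decides whether the installed lcov predates 2.0 and therefore needs the `--rc lcov_*` option spelling. A self-contained sketch of that comparison; the function name paraphrases scripts/common.sh's `lt`/`cmp_versions` pair rather than reproducing them exactly:

```shell
# Sketch of the version test traced above: split each version string on '.'
# and '-', pad the shorter one with zeros, and compare numerically per field.
version_lt() {
    local -a ver1 ver2
    local IFS=.-
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

With this shape, `version_lt 1.15 2` succeeds at the first field (1 < 2), matching the `ver1[v] < ver2[v] ... return 0` path in the trace. Non-numeric fields (e.g. `-rc1` suffixes beyond the split) would need extra handling the sketch omits.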
00:32:12.059 --rc genhtml_legend=1 00:32:12.059 --rc geninfo_all_blocks=1 00:32:12.059 --rc geninfo_unexecuted_blocks=1 00:32:12.059 00:32:12.059 ' 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.059 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:12.319 12:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.319 12:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.319 12:46:17 
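The paths/export.sh lines above prepend the same golangci/protoc/go directories each time the file is sourced, so the exported PATH ends up repeating those entries many times. Duplicates are harmless for lookup but slow and noisy; a small helper (illustrative only, not part of SPDK) that keeps the first occurrence of each entry while preserving order:

```shell
# Collapse duplicate PATH entries, keeping first-seen order. The ':entry:'
# membership test on $seen avoids false prefix/suffix matches.
dedupe_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        case "$seen" in
            *":$entry:"*) continue ;;   # already kept this entry
        esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}
```

Applied as `PATH=$(dedupe_path "$PATH")` after the exports, it would reduce the repeated toolchain prefixes in the trace to a single copy each.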
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.319 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:12.320 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:12.320 12:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:12.320 12:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.890 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:18.891 12:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:18.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:18.891 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:18.891 Found net devices under 0000:86:00.0: cvl_0_0 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:18.891 Found net devices under 0000:86:00.1: cvl_0_1 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:18.891 12:46:23 
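The enumeration above expands `/sys/bus/pci/devices/$pci/net/`* for each detected NIC to learn which kernel interface names (here `cvl_0_0`, `cvl_0_1`) sit behind the PCI addresses. A sketch of that lookup with the sysfs root as a parameter so it can be exercised against a mock tree; nvmf/common.sh itself hardcodes `/sys/bus/pci/devices`:

```shell
# List the net interfaces registered under a PCI device's sysfs node,
# mirroring the pci_net_devs=(".../net/"*) expansion in the trace.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2 path
    for path in "$sysfs_root/$pci/net/"*; do
        [ -e "$path" ] || continue    # glob matched nothing: no net devs
        printf '%s\n' "${path##*/}"   # strip directory prefix, keep ifname
    done
}
```

The `[ -e ... ]` guard covers the unbound-driver case the trace also checks for: a device with no network driver attached has no `net/` children, and the unexpanded glob must not be treated as an interface name.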
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.891 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
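The nvmf_tcp_init sequence above builds the test topology: the target-side interface moves into a fresh namespace, initiator and target get 10.0.0.1/10.0.0.2, and the port-4420 ACCEPT rule carries an `SPDK_NVMF:` comment so the teardown seen earlier (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) can strip every test rule in one pass. A dry-run sketch of those steps; `run=echo` keeps it executable without root or the cvl_* NICs, and dropping it would run the commands for real:

```shell
# Dry-run sketch of the netns topology from the trace: target NIC in a
# namespace, /24 addresses on both sides, and a comment-tagged firewall rule
# that bulk teardown can filter out by the SPDK_NVMF: marker.
run=echo
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0 ini_if=cvl_0_1
$run ip netns add "$ns"
$run ip link set "$tgt_if" netns "$ns"
$run ip addr add 10.0.0.1/24 dev "$ini_if"
$run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
$run ip link set "$ini_if" up
$run ip netns exec "$ns" ip link set "$tgt_if" up
$run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
```

Embedding the original rule text inside the comment is what lets cleanup stay stateless: no rule list has to be remembered across test stages, only the tag.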
00:32:18.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:32:18.892 00:32:18.892 --- 10.0.0.2 ping statistics --- 00:32:18.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.892 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:32:18.892 00:32:18.892 --- 10.0.0.1 ping statistics --- 00:32:18.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.892 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:18.892 12:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=386639 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 386639 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 386639 ']' 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.892 [2024-11-20 12:46:23.816860] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:18.892 [2024-11-20 12:46:23.817768] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:18.892 [2024-11-20 12:46:23.817804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.892 [2024-11-20 12:46:23.892814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.892 [2024-11-20 12:46:23.935068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.892 [2024-11-20 12:46:23.935102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.892 [2024-11-20 12:46:23.935110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.892 [2024-11-20 12:46:23.935116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.892 [2024-11-20 12:46:23.935121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:18.892 [2024-11-20 12:46:23.936546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.892 [2024-11-20 12:46:23.936657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.892 [2024-11-20 12:46:23.936761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.892 [2024-11-20 12:46:23.936762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.892 [2024-11-20 12:46:23.937016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:18.892 12:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.892 12:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.892 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.892 [2024-11-20 12:46:24.074621] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:18.892 [2024-11-20 12:46:24.075318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:18.892 [2024-11-20 12:46:24.075514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:18.893 [2024-11-20 12:46:24.075657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 [2024-11-20 12:46:24.085323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 Malloc0 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:18.893 [2024-11-20 12:46:24.157692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=386662 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=386665 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:18.893 12:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:18.893 { 00:32:18.893 "params": { 00:32:18.893 "name": "Nvme$subsystem", 00:32:18.893 "trtype": "$TEST_TRANSPORT", 00:32:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:18.893 "adrfam": "ipv4", 00:32:18.893 "trsvcid": "$NVMF_PORT", 00:32:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:18.893 "hdgst": ${hdgst:-false}, 00:32:18.893 "ddgst": ${ddgst:-false} 00:32:18.893 }, 00:32:18.893 "method": "bdev_nvme_attach_controller" 00:32:18.893 } 00:32:18.893 EOF 00:32:18.893 )") 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=386667 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:18.893 12:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=386670 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:18.893 { 00:32:18.893 "params": { 00:32:18.893 "name": "Nvme$subsystem", 00:32:18.893 "trtype": "$TEST_TRANSPORT", 00:32:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:18.893 "adrfam": "ipv4", 00:32:18.893 "trsvcid": "$NVMF_PORT", 00:32:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:18.893 "hdgst": ${hdgst:-false}, 00:32:18.893 "ddgst": ${ddgst:-false} 00:32:18.893 }, 00:32:18.893 "method": "bdev_nvme_attach_controller" 00:32:18.893 } 00:32:18.893 EOF 00:32:18.893 )") 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:18.893 { 00:32:18.893 "params": { 00:32:18.893 "name": "Nvme$subsystem", 00:32:18.893 "trtype": "$TEST_TRANSPORT", 00:32:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:18.893 "adrfam": "ipv4", 00:32:18.893 "trsvcid": "$NVMF_PORT", 00:32:18.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:18.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:18.893 "hdgst": ${hdgst:-false}, 00:32:18.893 "ddgst": ${ddgst:-false} 00:32:18.893 }, 00:32:18.893 "method": "bdev_nvme_attach_controller" 00:32:18.893 } 00:32:18.893 EOF 00:32:18.893 )") 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:18.893 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:18.893 { 00:32:18.893 "params": { 00:32:18.893 "name": "Nvme$subsystem", 00:32:18.893 "trtype": "$TEST_TRANSPORT", 00:32:18.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:18.893 "adrfam": "ipv4", 00:32:18.893 "trsvcid": "$NVMF_PORT", 00:32:18.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:18.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:18.894 "hdgst": ${hdgst:-false}, 00:32:18.894 "ddgst": ${ddgst:-false} 00:32:18.894 }, 00:32:18.894 "method": 
"bdev_nvme_attach_controller" 00:32:18.894 } 00:32:18.894 EOF 00:32:18.894 )") 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 386662 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:18.894 "params": { 00:32:18.894 "name": "Nvme1", 00:32:18.894 "trtype": "tcp", 00:32:18.894 "traddr": "10.0.0.2", 00:32:18.894 "adrfam": "ipv4", 00:32:18.894 "trsvcid": "4420", 00:32:18.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.894 "hdgst": false, 00:32:18.894 "ddgst": false 00:32:18.894 }, 00:32:18.894 "method": "bdev_nvme_attach_controller" 00:32:18.894 }' 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:18.894 "params": { 00:32:18.894 "name": "Nvme1", 00:32:18.894 "trtype": "tcp", 00:32:18.894 "traddr": "10.0.0.2", 00:32:18.894 "adrfam": "ipv4", 00:32:18.894 "trsvcid": "4420", 00:32:18.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.894 "hdgst": false, 00:32:18.894 "ddgst": false 00:32:18.894 }, 00:32:18.894 "method": "bdev_nvme_attach_controller" 00:32:18.894 }' 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:18.894 "params": { 00:32:18.894 "name": "Nvme1", 00:32:18.894 "trtype": "tcp", 00:32:18.894 "traddr": "10.0.0.2", 00:32:18.894 "adrfam": "ipv4", 00:32:18.894 "trsvcid": "4420", 00:32:18.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.894 "hdgst": false, 00:32:18.894 "ddgst": false 00:32:18.894 }, 00:32:18.894 "method": "bdev_nvme_attach_controller" 00:32:18.894 }' 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:18.894 12:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:18.894 "params": { 00:32:18.894 "name": "Nvme1", 00:32:18.894 "trtype": "tcp", 00:32:18.894 "traddr": "10.0.0.2", 00:32:18.894 "adrfam": "ipv4", 00:32:18.894 "trsvcid": "4420", 00:32:18.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.894 "hdgst": false, 00:32:18.894 "ddgst": false 00:32:18.894 }, 00:32:18.894 "method": "bdev_nvme_attach_controller" 
00:32:18.894 }' 00:32:18.894 [2024-11-20 12:46:24.208534] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:18.894 [2024-11-20 12:46:24.208587] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:18.894 [2024-11-20 12:46:24.212596] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:18.894 [2024-11-20 12:46:24.212640] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:18.894 [2024-11-20 12:46:24.213025] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:18.894 [2024-11-20 12:46:24.213067] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:18.894 [2024-11-20 12:46:24.213144] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:18.894 [2024-11-20 12:46:24.213184] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:18.894 [2024-11-20 12:46:24.396933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.894 [2024-11-20 12:46:24.439286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:18.894 [2024-11-20 12:46:24.490566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.894 [2024-11-20 12:46:24.539289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:18.894 [2024-11-20 12:46:24.553185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.894 [2024-11-20 12:46:24.595743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:18.894 [2024-11-20 12:46:24.612719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.152 [2024-11-20 12:46:24.652334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:19.152 Running I/O for 1 seconds... 00:32:19.152 Running I/O for 1 seconds... 00:32:19.152 Running I/O for 1 seconds... 00:32:19.410 Running I/O for 1 seconds... 
00:32:19.975 9279.00 IOPS, 36.25 MiB/s 00:32:19.975 Latency(us) 00:32:19.975 [2024-11-20T11:46:25.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.975 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:19.975 Nvme1n1 : 1.01 9270.49 36.21 0.00 0.00 13691.59 3464.05 23218.47 00:32:19.975 [2024-11-20T11:46:25.741Z] =================================================================================================================== 00:32:19.975 [2024-11-20T11:46:25.741Z] Total : 9270.49 36.21 0.00 0.00 13691.59 3464.05 23218.47 00:32:20.234 246024.00 IOPS, 961.03 MiB/s 00:32:20.234 Latency(us) 00:32:20.234 [2024-11-20T11:46:26.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.234 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:20.234 Nvme1n1 : 1.00 245657.13 959.60 0.00 0.00 518.85 222.35 1490.16 00:32:20.234 [2024-11-20T11:46:26.000Z] =================================================================================================================== 00:32:20.234 [2024-11-20T11:46:26.000Z] Total : 245657.13 959.60 0.00 0.00 518.85 222.35 1490.16 00:32:20.234 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 386665 00:32:20.234 8235.00 IOPS, 32.17 MiB/s 00:32:20.234 Latency(us) 00:32:20.234 [2024-11-20T11:46:26.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.234 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:20.234 Nvme1n1 : 1.01 8320.25 32.50 0.00 0.00 15339.75 4774.77 24591.60 00:32:20.234 [2024-11-20T11:46:26.000Z] =================================================================================================================== 00:32:20.234 [2024-11-20T11:46:26.000Z] Total : 8320.25 32.50 0.00 0.00 15339.75 4774.77 24591.60 00:32:20.234 13687.00 IOPS, 53.46 MiB/s 00:32:20.234 Latency(us) 00:32:20.235 
[2024-11-20T11:46:26.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.235 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:20.235 Nvme1n1 : 1.00 13771.75 53.80 0.00 0.00 9275.06 2559.02 13793.77 00:32:20.235 [2024-11-20T11:46:26.001Z] =================================================================================================================== 00:32:20.235 [2024-11-20T11:46:26.001Z] Total : 13771.75 53.80 0.00 0.00 9275.06 2559.02 13793.77 00:32:20.494 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 386667 00:32:20.494 12:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 386670 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.494 12:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.494 rmmod nvme_tcp 00:32:20.494 rmmod nvme_fabrics 00:32:20.494 rmmod nvme_keyring 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 386639 ']' 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 386639 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 386639 ']' 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 386639 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 386639 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 386639' 00:32:20.494 killing process with pid 386639 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 386639 00:32:20.494 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 386639 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.753 12:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.753 12:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.659 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.659 00:32:22.659 real 0m10.765s 00:32:22.659 user 0m15.150s 00:32:22.659 sys 0m6.376s 00:32:22.659 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.659 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:22.659 ************************************ 00:32:22.659 END TEST nvmf_bdev_io_wait 00:32:22.659 ************************************ 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.919 ************************************ 00:32:22.919 START TEST nvmf_queue_depth 00:32:22.919 ************************************ 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:22.919 * Looking for test storage... 
00:32:22.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:22.919 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.920 --rc genhtml_branch_coverage=1 00:32:22.920 --rc genhtml_function_coverage=1 00:32:22.920 --rc genhtml_legend=1 00:32:22.920 --rc geninfo_all_blocks=1 00:32:22.920 --rc geninfo_unexecuted_blocks=1 00:32:22.920 00:32:22.920 ' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.920 --rc genhtml_branch_coverage=1 00:32:22.920 --rc genhtml_function_coverage=1 00:32:22.920 --rc genhtml_legend=1 00:32:22.920 --rc geninfo_all_blocks=1 00:32:22.920 --rc geninfo_unexecuted_blocks=1 00:32:22.920 00:32:22.920 ' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.920 --rc genhtml_branch_coverage=1 00:32:22.920 --rc genhtml_function_coverage=1 00:32:22.920 --rc genhtml_legend=1 00:32:22.920 --rc geninfo_all_blocks=1 00:32:22.920 --rc geninfo_unexecuted_blocks=1 00:32:22.920 00:32:22.920 ' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:22.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.920 --rc genhtml_branch_coverage=1 00:32:22.920 --rc genhtml_function_coverage=1 00:32:22.920 --rc genhtml_legend=1 00:32:22.920 --rc 
geninfo_all_blocks=1 00:32:22.920 --rc geninfo_unexecuted_blocks=1 00:32:22.920 00:32:22.920 ' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.920 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.920 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.920 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.180 12:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.180 12:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.750 
12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:29.750 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.750 12:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:29.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:29.750 Found net devices under 0000:86:00.0: cvl_0_0 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.750 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:29.751 Found net devices under 0000:86:00.1: cvl_0_1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.751 12:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:29.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms
00:32:29.751
00:32:29.751 --- 10.0.0.2 ping statistics ---
00:32:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:29.751 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:29.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:29.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:32:29.751
00:32:29.751 --- 10.0.0.1 ping statistics ---
00:32:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:29.751 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:29.751 12:46:34
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=390533 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 390533 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 390533 ']' 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 [2024-11-20 12:46:34.627183] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:29.751 [2024-11-20 12:46:34.628136] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:29.751 [2024-11-20 12:46:34.628173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.751 [2024-11-20 12:46:34.692927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.751 [2024-11-20 12:46:34.731398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.751 [2024-11-20 12:46:34.731435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.751 [2024-11-20 12:46:34.731442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.751 [2024-11-20 12:46:34.731448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.751 [2024-11-20 12:46:34.731453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.751 [2024-11-20 12:46:34.731973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.751 [2024-11-20 12:46:34.797700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:29.751 [2024-11-20 12:46:34.797910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 [2024-11-20 12:46:34.876709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 Malloc0 00:32:29.751 12:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.751 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.752 [2024-11-20 12:46:34.948781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.752 
12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=390681 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 390681 /var/tmp/bdevperf.sock 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 390681 ']' 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:29.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.752 12:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.752 [2024-11-20 12:46:35.002061] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:29.752 [2024-11-20 12:46:35.002106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390681 ] 00:32:29.752 [2024-11-20 12:46:35.076418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.752 [2024-11-20 12:46:35.117463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:29.752 NVMe0n1 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:29.752 12:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:29.752 Running I/O for 10 seconds... 
00:32:32.063 11847.00 IOPS, 46.28 MiB/s [2024-11-20T11:46:38.765Z] 12155.00 IOPS, 47.48 MiB/s [2024-11-20T11:46:39.710Z] 12284.33 IOPS, 47.99 MiB/s [2024-11-20T11:46:40.646Z] 12291.75 IOPS, 48.01 MiB/s [2024-11-20T11:46:41.580Z] 12292.40 IOPS, 48.02 MiB/s [2024-11-20T11:46:42.514Z] 12370.33 IOPS, 48.32 MiB/s [2024-11-20T11:46:43.449Z] 12422.29 IOPS, 48.52 MiB/s [2024-11-20T11:46:44.826Z] 12432.75 IOPS, 48.57 MiB/s [2024-11-20T11:46:45.764Z] 12478.56 IOPS, 48.74 MiB/s [2024-11-20T11:46:45.764Z] 12495.50 IOPS, 48.81 MiB/s 00:32:39.998 Latency(us) 00:32:39.998 [2024-11-20T11:46:45.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.998 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:39.998 Verification LBA range: start 0x0 length 0x4000 00:32:39.999 NVMe0n1 : 10.05 12531.21 48.95 0.00 0.00 81464.67 16602.45 53926.77 00:32:39.999 [2024-11-20T11:46:45.765Z] =================================================================================================================== 00:32:39.999 [2024-11-20T11:46:45.765Z] Total : 12531.21 48.95 0.00 0.00 81464.67 16602.45 53926.77 00:32:39.999 { 00:32:39.999 "results": [ 00:32:39.999 { 00:32:39.999 "job": "NVMe0n1", 00:32:39.999 "core_mask": "0x1", 00:32:39.999 "workload": "verify", 00:32:39.999 "status": "finished", 00:32:39.999 "verify_range": { 00:32:39.999 "start": 0, 00:32:39.999 "length": 16384 00:32:39.999 }, 00:32:39.999 "queue_depth": 1024, 00:32:39.999 "io_size": 4096, 00:32:39.999 "runtime": 10.05162, 00:32:39.999 "iops": 12531.213873982502, 00:32:39.999 "mibps": 48.95005419524415, 00:32:39.999 "io_failed": 0, 00:32:39.999 "io_timeout": 0, 00:32:39.999 "avg_latency_us": 81464.67162389576, 00:32:39.999 "min_latency_us": 16602.453333333335, 00:32:39.999 "max_latency_us": 53926.76571428571 00:32:39.999 } 00:32:39.999 ], 00:32:39.999 "core_count": 1 00:32:39.999 } 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 390681 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 390681 ']' 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 390681 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390681 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390681' 00:32:39.999 killing process with pid 390681 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 390681 00:32:39.999 Received shutdown signal, test time was about 10.000000 seconds 00:32:39.999 00:32:39.999 Latency(us) 00:32:39.999 [2024-11-20T11:46:45.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.999 [2024-11-20T11:46:45.765Z] =================================================================================================================== 00:32:39.999 [2024-11-20T11:46:45.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 390681 00:32:39.999 12:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:39.999 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:39.999 rmmod nvme_tcp 00:32:39.999 rmmod nvme_fabrics 00:32:40.304 rmmod nvme_keyring 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 390533 ']' 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 390533 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 390533 ']' 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 390533 00:32:40.304 12:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390533 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390533' 00:32:40.304 killing process with pid 390533 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 390533 00:32:40.304 12:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 390533 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.304 12:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:42.899 00:32:42.899 real 0m19.628s 00:32:42.899 user 0m22.634s 00:32:42.899 sys 0m6.276s 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:42.899 ************************************ 00:32:42.899 END TEST nvmf_queue_depth 00:32:42.899 ************************************ 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:42.899 ************************************ 00:32:42.899 START 
TEST nvmf_target_multipath 00:32:42.899 ************************************ 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:42.899 * Looking for test storage... 00:32:42.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:42.899 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.900 12:46:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.900 --rc genhtml_branch_coverage=1 00:32:42.900 --rc genhtml_function_coverage=1 00:32:42.900 --rc genhtml_legend=1 00:32:42.900 --rc geninfo_all_blocks=1 00:32:42.900 --rc geninfo_unexecuted_blocks=1 00:32:42.900 00:32:42.900 ' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.900 --rc genhtml_branch_coverage=1 00:32:42.900 --rc genhtml_function_coverage=1 00:32:42.900 --rc genhtml_legend=1 00:32:42.900 --rc geninfo_all_blocks=1 00:32:42.900 --rc geninfo_unexecuted_blocks=1 00:32:42.900 00:32:42.900 ' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.900 --rc genhtml_branch_coverage=1 00:32:42.900 --rc genhtml_function_coverage=1 00:32:42.900 --rc genhtml_legend=1 00:32:42.900 --rc geninfo_all_blocks=1 00:32:42.900 --rc geninfo_unexecuted_blocks=1 00:32:42.900 00:32:42.900 ' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.900 --rc genhtml_branch_coverage=1 00:32:42.900 --rc genhtml_function_coverage=1 00:32:42.900 --rc genhtml_legend=1 00:32:42.900 --rc geninfo_all_blocks=1 00:32:42.900 --rc geninfo_unexecuted_blocks=1 00:32:42.900 00:32:42.900 ' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.900 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.901 12:46:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.901 12:46:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:42.901 12:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.472 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:49.472 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:49.472 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.472 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:49.473 Found net devices under 0000:86:00.0: cvl_0_0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.473 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:49.473 Found net devices under 0000:86:00.1: cvl_0_1 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.473 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.473 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:32:49.473 00:32:49.473 --- 10.0.0.2 ping statistics --- 00:32:49.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.473 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:49.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:32:49.473 00:32:49.473 --- 10.0.0.1 ping statistics --- 00:32:49.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.473 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:49.473 only one NIC for nvmf test 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:49.473 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.473 rmmod nvme_tcp 00:32:49.473 rmmod nvme_fabrics 00:32:49.473 rmmod nvme_keyring 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:49.473 12:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.473 12:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.856 
12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.856 00:32:50.856 real 0m8.311s 00:32:50.856 user 0m1.823s 00:32:50.856 sys 0m4.497s 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:50.856 ************************************ 00:32:50.856 END TEST nvmf_target_multipath 00:32:50.856 ************************************ 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.856 ************************************ 00:32:50.856 START TEST nvmf_zcopy 00:32:50.856 ************************************ 00:32:50.856 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:51.116 * Looking for test storage... 
00:32:51.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.116 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:51.117 12:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.117 --rc genhtml_branch_coverage=1 00:32:51.117 --rc genhtml_function_coverage=1 00:32:51.117 --rc genhtml_legend=1 00:32:51.117 --rc geninfo_all_blocks=1 00:32:51.117 --rc geninfo_unexecuted_blocks=1 00:32:51.117 00:32:51.117 ' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.117 --rc genhtml_branch_coverage=1 00:32:51.117 --rc genhtml_function_coverage=1 00:32:51.117 --rc genhtml_legend=1 00:32:51.117 --rc geninfo_all_blocks=1 00:32:51.117 --rc geninfo_unexecuted_blocks=1 00:32:51.117 00:32:51.117 ' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.117 --rc genhtml_branch_coverage=1 00:32:51.117 --rc genhtml_function_coverage=1 00:32:51.117 --rc genhtml_legend=1 00:32:51.117 --rc geninfo_all_blocks=1 00:32:51.117 --rc geninfo_unexecuted_blocks=1 00:32:51.117 00:32:51.117 ' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:51.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.117 --rc genhtml_branch_coverage=1 00:32:51.117 --rc genhtml_function_coverage=1 00:32:51.117 --rc genhtml_legend=1 00:32:51.117 --rc geninfo_all_blocks=1 00:32:51.117 --rc geninfo_unexecuted_blocks=1 00:32:51.117 00:32:51.117 ' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.117 12:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.117 12:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.117 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.118 12:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.688 
12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.688 12:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:57.688 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:57.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:57.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:57.689 Found net devices under 0000:86:00.0: cvl_0_0 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:57.689 Found net devices under 0000:86:00.1: cvl_0_1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.689 12:47:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:32:57.689 00:32:57.689 --- 10.0.0.2 ping statistics --- 00:32:57.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.689 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:32:57.689 00:32:57.689 --- 10.0.0.1 ping statistics --- 00:32:57.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.689 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.689 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=399335 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 399335 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 399335 ']' 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 [2024-11-20 12:47:02.756647] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.690 [2024-11-20 12:47:02.757532] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:32:57.690 [2024-11-20 12:47:02.757565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.690 [2024-11-20 12:47:02.834779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.690 [2024-11-20 12:47:02.874577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.690 [2024-11-20 12:47:02.874614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.690 [2024-11-20 12:47:02.874622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.690 [2024-11-20 12:47:02.874627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.690 [2024-11-20 12:47:02.874632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.690 [2024-11-20 12:47:02.875161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.690 [2024-11-20 12:47:02.940062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.690 [2024-11-20 12:47:02.940286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 [2024-11-20 12:47:03.003828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 
12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 [2024-11-20 12:47:03.032037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 malloc0 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:57.690 { 00:32:57.690 "params": { 00:32:57.690 "name": "Nvme$subsystem", 00:32:57.690 "trtype": "$TEST_TRANSPORT", 00:32:57.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.690 "adrfam": "ipv4", 00:32:57.690 "trsvcid": "$NVMF_PORT", 00:32:57.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.690 "hdgst": ${hdgst:-false}, 00:32:57.690 "ddgst": ${ddgst:-false} 00:32:57.690 }, 00:32:57.690 "method": "bdev_nvme_attach_controller" 00:32:57.690 } 00:32:57.690 EOF 00:32:57.690 )") 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:57.690 12:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:57.690 12:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:57.690 "params": { 00:32:57.690 "name": "Nvme1", 00:32:57.690 "trtype": "tcp", 00:32:57.690 "traddr": "10.0.0.2", 00:32:57.690 "adrfam": "ipv4", 00:32:57.690 "trsvcid": "4420", 00:32:57.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.690 "hdgst": false, 00:32:57.690 "ddgst": false 00:32:57.690 }, 00:32:57.690 "method": "bdev_nvme_attach_controller" 00:32:57.691 }' 00:32:57.691 [2024-11-20 12:47:03.123874] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:32:57.691 [2024-11-20 12:47:03.123919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399355 ] 00:32:57.691 [2024-11-20 12:47:03.198991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.691 [2024-11-20 12:47:03.239291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.691 Running I/O for 10 seconds... 
00:33:00.004 8454.00 IOPS, 66.05 MiB/s [2024-11-20T11:47:06.706Z] 8549.50 IOPS, 66.79 MiB/s [2024-11-20T11:47:07.643Z] 8568.00 IOPS, 66.94 MiB/s [2024-11-20T11:47:08.580Z] 8580.25 IOPS, 67.03 MiB/s [2024-11-20T11:47:09.517Z] 8597.80 IOPS, 67.17 MiB/s [2024-11-20T11:47:10.454Z] 8599.00 IOPS, 67.18 MiB/s [2024-11-20T11:47:11.833Z] 8586.86 IOPS, 67.08 MiB/s [2024-11-20T11:47:12.769Z] 8581.38 IOPS, 67.04 MiB/s [2024-11-20T11:47:13.706Z] 8588.44 IOPS, 67.10 MiB/s [2024-11-20T11:47:13.706Z] 8591.00 IOPS, 67.12 MiB/s 00:33:07.940 Latency(us) 00:33:07.940 [2024-11-20T11:47:13.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:07.940 Verification LBA range: start 0x0 length 0x1000 00:33:07.940 Nvme1n1 : 10.01 8594.98 67.15 0.00 0.00 14850.47 436.91 21595.67 00:33:07.940 [2024-11-20T11:47:13.706Z] =================================================================================================================== 00:33:07.940 [2024-11-20T11:47:13.706Z] Total : 8594.98 67.15 0.00 0.00 14850.47 436.91 21595.67 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=400956 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:07.940 12:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:07.940 { 00:33:07.940 "params": { 00:33:07.940 "name": "Nvme$subsystem", 00:33:07.940 "trtype": "$TEST_TRANSPORT", 00:33:07.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:07.940 "adrfam": "ipv4", 00:33:07.940 "trsvcid": "$NVMF_PORT", 00:33:07.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:07.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:07.940 "hdgst": ${hdgst:-false}, 00:33:07.940 "ddgst": ${ddgst:-false} 00:33:07.940 }, 00:33:07.940 "method": "bdev_nvme_attach_controller" 00:33:07.940 } 00:33:07.940 EOF 00:33:07.940 )") 00:33:07.940 [2024-11-20 12:47:13.627500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.627530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:07.940 12:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:07.940 "params": { 00:33:07.940 "name": "Nvme1", 00:33:07.940 "trtype": "tcp", 00:33:07.940 "traddr": "10.0.0.2", 00:33:07.940 "adrfam": "ipv4", 00:33:07.940 "trsvcid": "4420", 00:33:07.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:07.940 "hdgst": false, 00:33:07.940 "ddgst": false 00:33:07.940 }, 00:33:07.940 "method": "bdev_nvme_attach_controller" 00:33:07.940 }' 00:33:07.940 [2024-11-20 12:47:13.639467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.639481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 [2024-11-20 12:47:13.651463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.651473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 [2024-11-20 12:47:13.663475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.663487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 [2024-11-20 12:47:13.666114] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:33:07.940 [2024-11-20 12:47:13.666155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400956 ] 00:33:07.940 [2024-11-20 12:47:13.675466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.675477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 [2024-11-20 12:47:13.687463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.687473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.940 [2024-11-20 12:47:13.699465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.940 [2024-11-20 12:47:13.699476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.711464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.711474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.723464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.723473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.735466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.735478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.739767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.200 [2024-11-20 12:47:13.747465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:08.200 [2024-11-20 12:47:13.747476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.759466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.759481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.771464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.771475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.781261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.200 [2024-11-20 12:47:13.783475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.783487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.795486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.795506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.807472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.807501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.819476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.819491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.831466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.831492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.843467] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.843482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.855462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.855475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.867473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.867494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.879470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.879485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.891469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.891485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.903468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.903483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.915463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.915475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.927465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.927477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.939468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.939483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.200 [2024-11-20 12:47:13.951465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.200 [2024-11-20 12:47:13.951479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:13.963462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:13.963473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:13.975461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:13.975473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:13.987466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:13.987480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:13.999461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:13.999472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.011465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.011486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.023462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.023472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.035467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 
[2024-11-20 12:47:14.035491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.047462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.047484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.059462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.059473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.071461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.071473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.083470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.083488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 Running I/O for 5 seconds... 
00:33:08.460 [2024-11-20 12:47:14.100960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.100981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.116213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.116234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.127681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.127700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.141178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.141198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.156306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.156326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.168127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.168146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.181584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.181603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.196566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.196585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.460 [2024-11-20 12:47:14.211663] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.460 [2024-11-20 12:47:14.211683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.719 [2024-11-20 12:47:14.223314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.719 [2024-11-20 12:47:14.223335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.719 [2024-11-20 12:47:14.237699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.719 [2024-11-20 12:47:14.237719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.719 [2024-11-20 12:47:14.252643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.252662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.267314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.267332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.278480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.278498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.292946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.292964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.302914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.302938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.317028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.317047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.331767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.331786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.347430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.347459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.360214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.360236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.372709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.372727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.387757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.387775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.401368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.401391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.415915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.415934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.428330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 
[2024-11-20 12:47:14.428349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.441396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.441414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.456130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.456149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.720 [2024-11-20 12:47:14.472014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.720 [2024-11-20 12:47:14.472033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.482660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.482680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.497182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.497209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.511856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.511874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.523511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.523529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.537387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.537405] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.552260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.552278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.567244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.567263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.581320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.581339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.595777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.595795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.609749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.609768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.624609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.624628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.639797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.639816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.651750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.651772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:08.979 [2024-11-20 12:47:14.665147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.665170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.680279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.680298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.690673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.690692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.705483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.705504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.720323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.720341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.979 [2024-11-20 12:47:14.732093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.979 [2024-11-20 12:47:14.732111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.745427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.745446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.760094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.760112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.775324] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.775342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.789634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.789652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.804704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.804723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.814618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.814636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.829699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.829719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.844392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.844411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.857005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.857023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.871795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.871813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.887945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.887965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.902551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.902570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.917272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.917291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.932124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.932146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.947480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.238 [2024-11-20 12:47:14.947500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.238 [2024-11-20 12:47:14.960164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.239 [2024-11-20 12:47:14.960183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.239 [2024-11-20 12:47:14.973209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.239 [2024-11-20 12:47:14.973229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.239 [2024-11-20 12:47:14.988173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.239 [2024-11-20 12:47:14.988192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.239 [2024-11-20 12:47:15.000926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.239 
[2024-11-20 12:47:15.000946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.015261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.015280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.029545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.029564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.043707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.043727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.055718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.055737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.069440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.069470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.084112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.084132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.096056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.096075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 16776.00 IOPS, 131.06 MiB/s [2024-11-20T11:47:15.264Z] [2024-11-20 12:47:15.109112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 
[2024-11-20 12:47:15.109132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.123430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.123449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.136786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.136805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.151492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.151511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.165249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.165268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.179802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.179820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.191979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.191997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.204753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.204772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.219487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.219506] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.230986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.231005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.245796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.245817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.498 [2024-11-20 12:47:15.260087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.498 [2024-11-20 12:47:15.260110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.271309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.271330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.285197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.285234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.300670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.300690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.310812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.310831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.325654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.325674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:09.758 [2024-11-20 12:47:15.339862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.758 [2024-11-20 12:47:15.339881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.758 [2024-11-20 12:47:15.351287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.351307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.365255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.365274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.379970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.379989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.391097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.391116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.405450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.405471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.420213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.420248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.431799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.431818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.445321] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.445340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.460326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.460345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.475179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.475200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.489481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.489502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.504574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.504594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.759 [2024-11-20 12:47:15.514994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.759 [2024-11-20 12:47:15.515013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.018 [2024-11-20 12:47:15.529361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.018 [2024-11-20 12:47:15.529380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.018 [2024-11-20 12:47:15.543365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.018 [2024-11-20 12:47:15.543386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.018 [2024-11-20 12:47:15.554158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:10.018 [2024-11-20 12:47:15.554176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.568584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.568604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.578687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.578706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.593389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.593408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.607851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.607869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.623492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.623511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.636521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.636540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.651400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.651423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.662628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 
[2024-11-20 12:47:15.662647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.677063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.677081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.692077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.692095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.706792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.706811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.721064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.721083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.735845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.735865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.748206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.748224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.761494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.761513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.019 [2024-11-20 12:47:15.776450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.019 [2024-11-20 12:47:15.776469] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.278 [2024-11-20 12:47:15.786848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.278 [2024-11-20 12:47:15.786866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.278 [2024-11-20 12:47:15.801353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.278 [2024-11-20 12:47:15.801371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.278 [2024-11-20 12:47:15.815714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.278 [2024-11-20 12:47:15.815733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.278 [2024-11-20 12:47:15.826958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.278 [2024-11-20 12:47:15.826976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.278 [2024-11-20 12:47:15.841085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.841103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.855940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.855958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.870935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.870953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.885165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.885183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:10.279 [2024-11-20 12:47:15.899867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.899885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.915903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.915921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.931424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.931443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.945659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.945676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.960806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.960828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.975040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.975060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:15.988994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:15.989017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:16.003633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:16.003650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:16.015953] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:16.015970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.279 [2024-11-20 12:47:16.029299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.279 [2024-11-20 12:47:16.029319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.044107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.044129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.059247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.059266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.073582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.073601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.087877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.087895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.100142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.100161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 16798.00 IOPS, 131.23 MiB/s [2024-11-20T11:47:16.304Z] [2024-11-20 12:47:16.115008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.115026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.129438] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.129458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.144209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.144227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.159667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.159685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.172179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.172197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.186909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.186928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.200995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.201013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.215997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.216015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.226902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.226925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.241076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.241093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.255663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.255681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.267055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.267072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.281118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.281136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.538 [2024-11-20 12:47:16.295870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.538 [2024-11-20 12:47:16.295887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.311097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.311116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.325417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.325436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.339998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.340020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.355347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 
[2024-11-20 12:47:16.355366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.368318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.368337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.382954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.382972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.395542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.395560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.409792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.409811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.424773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.424792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.439525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.439544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.452190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.452213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.466996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.467014] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.480361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.480379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.495670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.495693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.506704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.506722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.521385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.521403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.535946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.535964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.548889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.548907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.797 [2024-11-20 12:47:16.560090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.797 [2024-11-20 12:47:16.560108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.574961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.574980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:11.056 [2024-11-20 12:47:16.588290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.588308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.602877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.602894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.616536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.616554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.631217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.631235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.645491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.645509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.659962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.659979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.675724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.675743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.689208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.689243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.704300] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.704319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.719322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.719342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.733190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.733215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.747935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.747955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.760149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.760172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.772895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.772913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.787662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.787680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.798266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.798284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.056 [2024-11-20 12:47:16.813031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:11.056 [2024-11-20 12:47:16.813049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.827819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.827838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.842904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.842922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.856891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.856909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.871304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.871323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.884880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.884899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.899665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.899683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.912278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.912297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.927300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 
[2024-11-20 12:47:16.927320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.940954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.940973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.956283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.956303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.971870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.971889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.984717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.984736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:16.995689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:16.995707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:17.009946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:17.009966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:17.024701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:17.024720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:17.039607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:17.039626] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:17.050971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:17.050990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.315 [2024-11-20 12:47:17.065124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.315 [2024-11-20 12:47:17.065143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.080373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.080392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.095111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.095130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 16802.67 IOPS, 131.27 MiB/s [2024-11-20T11:47:17.340Z] [2024-11-20 12:47:17.109444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.109463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.124191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.124216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.139785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.139803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.151249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.151267] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.165309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.165328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.180065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.180084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.195633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.195651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.209294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.209312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.224141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.224159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.238961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.238980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.253578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.253597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.268792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.268809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:11.574 [2024-11-20 12:47:17.283595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.283613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.296216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.296249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.311656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.311675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.574 [2024-11-20 12:47:17.323302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.574 [2024-11-20 12:47:17.323320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.833 [2024-11-20 12:47:17.337744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.833 [2024-11-20 12:47:17.337766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.833 [2024-11-20 12:47:17.352860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.833 [2024-11-20 12:47:17.352878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.833 [2024-11-20 12:47:17.367357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.833 [2024-11-20 12:47:17.367375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.833 [2024-11-20 12:47:17.380416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.833 [2024-11-20 12:47:17.380444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.833 [2024-11-20 12:47:17.393126] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.833 [2024-11-20 12:47:17.393143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.408217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.408251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.419190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.419214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.433387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.433405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.448285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.448303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.463014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.463032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.476420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.476438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.491540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.491558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.503873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.503890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.519476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.519495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.533036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.533054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.547710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.547731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.558472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.558491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.573540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.573558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.834 [2024-11-20 12:47:17.588163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.834 [2024-11-20 12:47:17.588180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.603062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.603081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.617180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 
[2024-11-20 12:47:17.617198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.631659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.631677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.642038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.642057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.657021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.657039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.671807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.671824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.683532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.683549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.697759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.697778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.712822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.712842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.727332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.727351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.102 [2024-11-20 12:47:17.741651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.102 [2024-11-20 12:47:17.741669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.756839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.756856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.771887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.771906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.787909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.787928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.802591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.802609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.817178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.817206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.832114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.832132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.103 [2024-11-20 12:47:17.847669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.847687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:12.103 [2024-11-20 12:47:17.861778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.103 [2024-11-20 12:47:17.861797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.360 [2024-11-20 12:47:17.876723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.876741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.891659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.891676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.904400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.904418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.919312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.919331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.933591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.933608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.948001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.948019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.963251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.963270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.977633] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.977651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:17.992716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:17.992735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.007211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.007230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.021083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.021100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.036428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.036446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.051710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.051728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.064041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.064058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.076910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.076928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.091938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.091963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 16780.00 IOPS, 131.09 MiB/s [2024-11-20T11:47:18.127Z] [2024-11-20 12:47:18.107725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.107743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.361 [2024-11-20 12:47:18.119698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.361 [2024-11-20 12:47:18.119716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.133440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.133458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.148257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.148277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.163197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.163221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.177273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.177294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.191716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.191735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.203221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.203241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.217616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.217636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.232636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.232660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.247737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.247756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.258338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.258357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.273260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.273278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.288108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.288126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.302878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.302896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.316555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 
[2024-11-20 12:47:18.316574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.331548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.331566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.619 [2024-11-20 12:47:18.342473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.619 [2024-11-20 12:47:18.342491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.620 [2024-11-20 12:47:18.357382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.620 [2024-11-20 12:47:18.357400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.620 [2024-11-20 12:47:18.371447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.620 [2024-11-20 12:47:18.371467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.382799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.382818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.397105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.397123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.412191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.412217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.427229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.427248] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.441598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.441616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.456443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.456462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.471126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.471145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.483279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.483298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.497729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.497748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.512448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.512468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.523845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.523863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.537433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.537452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:12.878 [2024-11-20 12:47:18.552190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.552213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.566936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.566955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.581473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.581493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.596006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.596025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.611594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.611613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.623077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.623094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.878 [2024-11-20 12:47:18.637481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.878 [2024-11-20 12:47:18.637499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.652401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.652419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.667089] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.667107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.681445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.681463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.696346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.696363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.712121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.712138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.136 [2024-11-20 12:47:18.726672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.136 [2024-11-20 12:47:18.726690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.137 [2024-11-20 12:47:18.741695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.137 [2024-11-20 12:47:18.741713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.137 [2024-11-20 12:47:18.756716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.137 [2024-11-20 12:47:18.756734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.137 [2024-11-20 12:47:18.771457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.137 [2024-11-20 12:47:18.771477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.137 [2024-11-20 12:47:18.784423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:13.137 [2024-11-20 12:47:18.784441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.799774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.799793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.815149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.815169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.828718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.828738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.843699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.843717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.854746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.854764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.869513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.869531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.884076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 [2024-11-20 12:47:18.884093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.138 [2024-11-20 12:47:18.895952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.138 
[2024-11-20 12:47:18.895970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.909452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.397 [2024-11-20 12:47:18.909470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.923961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.397 [2024-11-20 12:47:18.923979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.939437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.397 [2024-11-20 12:47:18.939456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.953194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.397 [2024-11-20 12:47:18.953217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.968129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.397 [2024-11-20 12:47:18.968146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.397 [2024-11-20 12:47:18.980812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:18.980830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:18.991331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:18.991350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.005224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.005242] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.020520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.020538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.035844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.035862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.049591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.049609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.064772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.064790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.079155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.079173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.093288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.093315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 16757.80 IOPS, 130.92 MiB/s [2024-11-20T11:47:19.164Z] [2024-11-20 12:47:19.108196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.108221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398
00:33:13.398 Latency(us)
00:33:13.398 [2024-11-20T11:47:19.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:13.398 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:13.398 Nvme1n1 : 5.01 16761.20 130.95 0.00 0.00 7629.86 2012.89 12732.71
00:33:13.398 [2024-11-20T11:47:19.164Z] ===================================================================================================================
00:33:13.398 [2024-11-20T11:47:19.164Z] Total : 16761.20 130.95 0.00 0.00 7629.86 2012.89 12732.71
00:33:13.398 [2024-11-20 12:47:19.119467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.119483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.131468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.131482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.143490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.143508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.398 [2024-11-20 12:47:19.155471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.398 [2024-11-20 12:47:19.155486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.167472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.167486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.179467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.179490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.191466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:33:13.657 [2024-11-20 12:47:19.191480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.203464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.203478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.215465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.215480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.227462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.227471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.239472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.239485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.251466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.251487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 [2024-11-20 12:47:19.263462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.657 [2024-11-20 12:47:19.263474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (400956) - No such process 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 400956 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.657 
12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:13.657 delay0 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:13.657 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.658 12:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:13.658 [2024-11-20 12:47:19.367661] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:20.223 [2024-11-20 12:47:25.735010] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c80 is same with the state(6) to be set 00:33:20.223 [2024-11-20 12:47:25.735046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3c80 is same with the state(6) to be set 00:33:20.223 Initializing NVMe Controllers 00:33:20.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:20.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:20.223 Initialization complete. Launching workers. 00:33:20.223 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1390 00:33:20.223 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1660, failed to submit 50 00:33:20.223 success 1530, unsuccessful 130, failed 0 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.223 rmmod nvme_tcp 00:33:20.223 rmmod nvme_fabrics 00:33:20.223 rmmod nvme_keyring 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 399335 ']' 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 399335 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 399335 ']' 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 399335 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399335 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399335' 00:33:20.223 killing process with pid 399335 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 399335 00:33:20.223 12:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 399335 00:33:20.482 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.482 12:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.482 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.482 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.483 12:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.388 00:33:22.388 real 0m31.547s 00:33:22.388 user 0m40.846s 00:33:22.388 sys 0m12.344s 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:22.388 ************************************ 00:33:22.388 END TEST nvmf_zcopy 00:33:22.388 ************************************ 00:33:22.388 12:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.388 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.648 ************************************ 00:33:22.648 START TEST nvmf_nmic 00:33:22.648 ************************************ 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:22.648 * Looking for test storage... 00:33:22.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:22.648 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:22.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.649 --rc genhtml_branch_coverage=1 00:33:22.649 --rc 
genhtml_function_coverage=1 00:33:22.649 --rc genhtml_legend=1 00:33:22.649 --rc geninfo_all_blocks=1 00:33:22.649 --rc geninfo_unexecuted_blocks=1 00:33:22.649 00:33:22.649 ' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:22.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.649 --rc genhtml_branch_coverage=1 00:33:22.649 --rc genhtml_function_coverage=1 00:33:22.649 --rc genhtml_legend=1 00:33:22.649 --rc geninfo_all_blocks=1 00:33:22.649 --rc geninfo_unexecuted_blocks=1 00:33:22.649 00:33:22.649 ' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:22.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.649 --rc genhtml_branch_coverage=1 00:33:22.649 --rc genhtml_function_coverage=1 00:33:22.649 --rc genhtml_legend=1 00:33:22.649 --rc geninfo_all_blocks=1 00:33:22.649 --rc geninfo_unexecuted_blocks=1 00:33:22.649 00:33:22.649 ' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:22.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.649 --rc genhtml_branch_coverage=1 00:33:22.649 --rc genhtml_function_coverage=1 00:33:22.649 --rc genhtml_legend=1 00:33:22.649 --rc geninfo_all_blocks=1 00:33:22.649 --rc geninfo_unexecuted_blocks=1 00:33:22.649 00:33:22.649 ' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.649 12:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.649 12:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.649 12:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.649 12:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.256 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:29.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:29.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:29.257 Found net devices under 0000:86:00.0: cvl_0_0 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:29.257 Found net devices under 0000:86:00.1: cvl_0_1 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.257 12:47:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.257 12:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:29.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:33:29.257 00:33:29.257 --- 10.0.0.2 ping statistics --- 00:33:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.257 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:29.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:33:29.257 00:33:29.257 --- 10.0.0.1 ping statistics --- 00:33:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.257 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.257 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:29.257 12:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=406367 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 406367 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 406367 ']' 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 [2024-11-20 12:47:34.334997] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:29.258 [2024-11-20 12:47:34.335976] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:33:29.258 [2024-11-20 12:47:34.336016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.258 [2024-11-20 12:47:34.415830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:29.258 [2024-11-20 12:47:34.458296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.258 [2024-11-20 12:47:34.458336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.258 [2024-11-20 12:47:34.458343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.258 [2024-11-20 12:47:34.458349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.258 [2024-11-20 12:47:34.458355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.258 [2024-11-20 12:47:34.459948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.258 [2024-11-20 12:47:34.460056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.258 [2024-11-20 12:47:34.460084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.258 [2024-11-20 12:47:34.460084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:29.258 [2024-11-20 12:47:34.528413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:29.258 [2024-11-20 12:47:34.528922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:29.258 [2024-11-20 12:47:34.529358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:29.258 [2024-11-20 12:47:34.529746] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:29.258 [2024-11-20 12:47:34.529799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 [2024-11-20 12:47:34.609141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 Malloc0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 [2024-11-20 
12:47:34.697455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:29.258 test case1: single bdev can't be used in multiple subsystems 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 [2024-11-20 12:47:34.732820] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:29.258 [2024-11-20 12:47:34.732843] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:29.258 [2024-11-20 12:47:34.732850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.258 request: 00:33:29.258 { 00:33:29.258 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:29.258 "namespace": { 00:33:29.258 "bdev_name": "Malloc0", 00:33:29.258 "no_auto_visible": false 00:33:29.258 }, 00:33:29.258 "method": "nvmf_subsystem_add_ns", 00:33:29.258 "req_id": 1 00:33:29.258 } 00:33:29.258 Got JSON-RPC error response 00:33:29.258 response: 00:33:29.258 { 00:33:29.258 "code": -32602, 00:33:29.258 "message": "Invalid parameters" 00:33:29.258 } 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:29.258 Adding namespace failed - expected result. 
00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:29.258 test case2: host connect to nvmf target in multiple paths 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:29.258 [2024-11-20 12:47:34.744926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.258 12:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:29.517 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:29.774 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:29.774 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:29.774 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:29.774 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:29.774 12:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:31.678 12:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:31.678 [global] 00:33:31.678 thread=1 00:33:31.678 invalidate=1 00:33:31.678 rw=write 00:33:31.678 time_based=1 00:33:31.678 runtime=1 00:33:31.678 ioengine=libaio 00:33:31.678 direct=1 00:33:31.678 bs=4096 00:33:31.678 iodepth=1 00:33:31.678 norandommap=0 00:33:31.678 numjobs=1 00:33:31.678 00:33:31.678 verify_dump=1 00:33:31.678 verify_backlog=512 00:33:31.678 verify_state_save=0 00:33:31.678 do_verify=1 00:33:31.678 verify=crc32c-intel 00:33:31.678 [job0] 00:33:31.678 filename=/dev/nvme0n1 00:33:31.678 Could not set queue depth (nvme0n1) 00:33:31.937 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:31.937 fio-3.35 00:33:31.937 Starting 1 thread 00:33:33.314 00:33:33.314 job0: (groupid=0, jobs=1): err= 0: pid=407150: Wed Nov 20 
12:47:38 2024 00:33:33.314 read: IOPS=510, BW=2044KiB/s (2093kB/s)(2052KiB/1004msec) 00:33:33.314 slat (nsec): min=6707, max=26575, avg=7978.72, stdev=3000.28 00:33:33.314 clat (usec): min=192, max=41989, avg=1650.74, stdev=7569.73 00:33:33.314 lat (usec): min=199, max=42012, avg=1658.72, stdev=7572.29 00:33:33.314 clat percentiles (usec): 00:33:33.314 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 200], 00:33:33.314 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:33:33.314 | 70.00th=[ 215], 80.00th=[ 217], 90.00th=[ 221], 95.00th=[ 231], 00:33:33.314 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:33.314 | 99.99th=[42206] 00:33:33.314 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:33:33.314 slat (nsec): min=9266, max=39683, avg=10400.25, stdev=1594.59 00:33:33.314 clat (usec): min=123, max=351, avg=135.29, stdev= 8.98 00:33:33.314 lat (usec): min=135, max=391, avg=145.69, stdev=10.01 00:33:33.314 clat percentiles (usec): 00:33:33.314 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:33:33.314 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 135], 00:33:33.314 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 145], 00:33:33.314 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 260], 99.95th=[ 351], 00:33:33.314 | 99.99th=[ 351] 00:33:33.314 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:33:33.314 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:33:33.314 lat (usec) : 250=98.37%, 500=0.46% 00:33:33.314 lat (msec) : 50=1.17% 00:33:33.314 cpu : usr=0.60%, sys=1.60%, ctx=1537, majf=0, minf=1 00:33:33.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.314 issued rwts: total=513,1024,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:33.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:33.314 00:33:33.314 Run status group 0 (all jobs): 00:33:33.314 READ: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2052KiB (2101kB), run=1004-1004msec 00:33:33.314 WRITE: bw=4080KiB/s (4178kB/s), 4080KiB/s-4080KiB/s (4178kB/s-4178kB/s), io=4096KiB (4194kB), run=1004-1004msec 00:33:33.314 00:33:33.314 Disk stats (read/write): 00:33:33.314 nvme0n1: ios=562/592, merge=0/0, ticks=825/79, in_queue=904, util=91.08% 00:33:33.314 12:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:33.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:33.314 12:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.314 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.583 rmmod nvme_tcp 00:33:33.583 rmmod nvme_fabrics 00:33:33.583 rmmod nvme_keyring 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 406367 ']' 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 406367 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 406367 ']' 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 406367 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:33.583 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.584 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406367 00:33:33.584 
12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.584 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.584 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406367' 00:33:33.584 killing process with pid 406367 00:33:33.584 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 406367 00:33:33.584 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 406367 00:33:33.852 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.853 12:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.756 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:35.756 00:33:35.756 real 0m13.279s 00:33:35.756 user 0m24.948s 00:33:35.756 sys 0m6.012s 00:33:35.756 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.756 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:35.757 ************************************ 00:33:35.757 END TEST nvmf_nmic 00:33:35.757 ************************************ 00:33:35.757 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:35.757 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:35.757 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.757 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:36.016 ************************************ 00:33:36.016 START TEST nvmf_fio_target 00:33:36.016 ************************************ 00:33:36.016 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:36.016 * Looking for test storage... 
00:33:36.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:36.016 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.017 
12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.017 --rc genhtml_branch_coverage=1 00:33:36.017 --rc genhtml_function_coverage=1 00:33:36.017 --rc genhtml_legend=1 00:33:36.017 --rc geninfo_all_blocks=1 00:33:36.017 --rc geninfo_unexecuted_blocks=1 00:33:36.017 00:33:36.017 ' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.017 --rc genhtml_branch_coverage=1 00:33:36.017 --rc genhtml_function_coverage=1 00:33:36.017 --rc genhtml_legend=1 00:33:36.017 --rc geninfo_all_blocks=1 00:33:36.017 --rc geninfo_unexecuted_blocks=1 00:33:36.017 00:33:36.017 ' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.017 --rc genhtml_branch_coverage=1 00:33:36.017 --rc genhtml_function_coverage=1 00:33:36.017 --rc genhtml_legend=1 00:33:36.017 --rc geninfo_all_blocks=1 00:33:36.017 --rc geninfo_unexecuted_blocks=1 00:33:36.017 00:33:36.017 ' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.017 --rc genhtml_branch_coverage=1 00:33:36.017 --rc genhtml_function_coverage=1 00:33:36.017 --rc genhtml_legend=1 00:33:36.017 --rc geninfo_all_blocks=1 
00:33:36.017 --rc geninfo_unexecuted_blocks=1 00:33:36.017 00:33:36.017 ' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:36.017 
12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.017 12:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.017 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.018 
12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:36.018 12:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.018 12:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.702 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.703 12:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:42.703 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:42.703 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.703 
12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:42.703 Found net 
devices under 0000:86:00.0: cvl_0_0 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:42.703 Found net devices under 0000:86:00.1: cvl_0_1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:42.703 12:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:42.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:33:42.703 00:33:42.703 --- 10.0.0.2 ping statistics --- 00:33:42.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.703 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:42.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:33:42.703 00:33:42.703 --- 10.0.0.1 ping statistics --- 00:33:42.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.703 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.703 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.704 12:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=410817 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 410817 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 410817 ']' 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.704 [2024-11-20 12:47:47.678353] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:42.704 [2024-11-20 12:47:47.679299] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:33:42.704 [2024-11-20 12:47:47.679338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.704 [2024-11-20 12:47:47.759816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:42.704 [2024-11-20 12:47:47.802384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.704 [2024-11-20 12:47:47.802421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.704 [2024-11-20 12:47:47.802428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.704 [2024-11-20 12:47:47.802434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.704 [2024-11-20 12:47:47.802438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:42.704 [2024-11-20 12:47:47.803882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.704 [2024-11-20 12:47:47.803994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:42.704 [2024-11-20 12:47:47.804099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.704 [2024-11-20 12:47:47.804100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:42.704 [2024-11-20 12:47:47.872282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:42.704 [2024-11-20 12:47:47.873069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:42.704 [2024-11-20 12:47:47.873261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:42.704 [2024-11-20 12:47:47.873645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:42.704 [2024-11-20 12:47:47.873692] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.704 12:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:42.704 [2024-11-20 12:47:48.096783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.704 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:42.704 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:42.704 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:42.963 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:42.963 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:43.223 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:43.223 12:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:43.482 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:43.482 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:43.482 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:43.740 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:43.740 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:43.999 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:43.999 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:44.257 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:44.257 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:44.258 12:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:44.516 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:44.516 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:44.774 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:44.774 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:45.032 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.032 [2024-11-20 12:47:50.748697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.032 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:45.290 12:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:45.549 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:45.808 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:45.809 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:45.809 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:45.809 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:45.809 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:45.809 12:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:33:47.713 12:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:47.713 [global] 00:33:47.713 thread=1 00:33:47.713 invalidate=1 00:33:47.713 rw=write 00:33:47.713 time_based=1 00:33:47.713 runtime=1 00:33:47.713 ioengine=libaio 00:33:47.713 direct=1 00:33:47.713 bs=4096 00:33:47.713 iodepth=1 00:33:47.713 norandommap=0 00:33:47.713 numjobs=1 00:33:47.713 00:33:47.713 verify_dump=1 00:33:47.713 verify_backlog=512 00:33:47.713 verify_state_save=0 00:33:47.713 do_verify=1 00:33:47.713 verify=crc32c-intel 00:33:47.713 [job0] 00:33:47.713 filename=/dev/nvme0n1 00:33:47.713 [job1] 00:33:47.713 filename=/dev/nvme0n2 00:33:47.713 [job2] 00:33:47.713 filename=/dev/nvme0n3 00:33:47.713 [job3] 00:33:47.713 filename=/dev/nvme0n4 00:33:47.970 Could not set queue depth (nvme0n1) 00:33:47.970 Could not set queue depth (nvme0n2) 00:33:47.970 Could not set queue depth (nvme0n3) 00:33:47.970 Could not set queue depth (nvme0n4) 00:33:48.229 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.229 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.229 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.229 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.229 fio-3.35 00:33:48.229 Starting 4 threads 00:33:49.604 00:33:49.604 job0: (groupid=0, jobs=1): err= 0: pid=412030: Wed Nov 20 12:47:54 2024 00:33:49.604 read: IOPS=2068, BW=8276KiB/s (8474kB/s)(8284KiB/1001msec) 00:33:49.604 slat (nsec): min=7345, max=48254, avg=8850.02, stdev=2884.49 00:33:49.605 clat (usec): min=178, max=1468, avg=235.39, stdev=67.45 00:33:49.605 lat (usec): min=185, 
max=1476, avg=244.24, stdev=67.41 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 188], 00:33:49.605 | 30.00th=[ 192], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 231], 00:33:49.605 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 416], 00:33:49.605 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 482], 99.95th=[ 553], 00:33:49.605 | 99.99th=[ 1467] 00:33:49.605 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:33:49.605 slat (usec): min=10, max=644, avg=13.06, stdev=13.01 00:33:49.605 clat (usec): min=127, max=373, avg=173.10, stdev=41.00 00:33:49.605 lat (usec): min=138, max=899, avg=186.16, stdev=44.10 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:33:49.605 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 167], 00:33:49.605 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 243], 95.00th=[ 251], 00:33:49.605 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 338], 00:33:49.605 | 99.99th=[ 375] 00:33:49.605 bw ( KiB/s): min= 8192, max= 8192, per=40.84%, avg=8192.00, stdev= 0.00, samples=1 00:33:49.605 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:49.605 lat (usec) : 250=85.17%, 500=14.79%, 750=0.02% 00:33:49.605 lat (msec) : 2=0.02% 00:33:49.605 cpu : usr=3.70%, sys=7.90%, ctx=4633, majf=0, minf=1 00:33:49.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 issued rwts: total=2071,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.605 job1: (groupid=0, jobs=1): err= 0: pid=412031: Wed Nov 20 12:47:54 2024 00:33:49.605 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:33:49.605 
slat (nsec): min=8519, max=23010, avg=21147.91, stdev=3753.00 00:33:49.605 clat (usec): min=277, max=41428, avg=39208.82, stdev=8487.64 00:33:49.605 lat (usec): min=298, max=41438, avg=39229.97, stdev=8487.64 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 277], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:49.605 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:49.605 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:49.605 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:49.605 | 99.99th=[41681] 00:33:49.605 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:33:49.605 slat (nsec): min=9214, max=41231, avg=10371.81, stdev=2447.36 00:33:49.605 clat (usec): min=126, max=345, avg=219.20, stdev=36.20 00:33:49.605 lat (usec): min=135, max=358, avg=229.57, stdev=36.26 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 131], 5.00th=[ 147], 10.00th=[ 163], 20.00th=[ 194], 00:33:49.605 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 231], 00:33:49.605 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:33:49.605 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 347], 00:33:49.605 | 99.99th=[ 347] 00:33:49.605 bw ( KiB/s): min= 4096, max= 4096, per=20.42%, avg=4096.00, stdev= 0.00, samples=1 00:33:49.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:49.605 lat (usec) : 250=77.94%, 500=17.94% 00:33:49.605 lat (msec) : 50=4.11% 00:33:49.605 cpu : usr=0.20%, sys=0.49%, ctx=535, majf=0, minf=2 00:33:49.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.605 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:33:49.605 job2: (groupid=0, jobs=1): err= 0: pid=412032: Wed Nov 20 12:47:54 2024 00:33:49.605 read: IOPS=206, BW=827KiB/s (847kB/s)(836KiB/1011msec) 00:33:49.605 slat (nsec): min=6907, max=27256, avg=9621.26, stdev=4866.79 00:33:49.605 clat (usec): min=209, max=41168, avg=4178.00, stdev=11957.99 00:33:49.605 lat (usec): min=217, max=41176, avg=4187.62, stdev=11960.16 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 260], 00:33:49.605 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 285], 00:33:49.605 | 70.00th=[ 322], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[41157], 00:33:49.605 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:49.605 | 99.99th=[41157] 00:33:49.605 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:33:49.605 slat (nsec): min=9847, max=37203, avg=11178.31, stdev=1701.99 00:33:49.605 clat (usec): min=223, max=454, avg=243.95, stdev=13.34 00:33:49.605 lat (usec): min=236, max=491, avg=255.13, stdev=14.13 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:33:49.605 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 243], 00:33:49.605 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 253], 00:33:49.605 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 457], 99.95th=[ 457], 00:33:49.605 | 99.99th=[ 457] 00:33:49.605 bw ( KiB/s): min= 4096, max= 4096, per=20.42%, avg=4096.00, stdev= 0.00, samples=1 00:33:49.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:49.605 lat (usec) : 250=70.18%, 500=27.05% 00:33:49.605 lat (msec) : 50=2.77% 00:33:49.605 cpu : usr=0.59%, sys=0.59%, ctx=723, majf=0, minf=1 00:33:49.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 issued rwts: total=209,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.605 job3: (groupid=0, jobs=1): err= 0: pid=412033: Wed Nov 20 12:47:54 2024 00:33:49.605 read: IOPS=1148, BW=4595KiB/s (4706kB/s)(4692KiB/1021msec) 00:33:49.605 slat (nsec): min=8435, max=51054, avg=10998.29, stdev=4899.69 00:33:49.605 clat (usec): min=211, max=41030, avg=601.93, stdev=3744.10 00:33:49.605 lat (usec): min=227, max=41040, avg=612.93, stdev=3745.07 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:33:49.605 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:33:49.605 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:33:49.605 | 99.00th=[ 379], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:49.605 | 99.99th=[41157] 00:33:49.605 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:33:49.605 slat (nsec): min=7296, max=48029, avg=12786.88, stdev=4588.03 00:33:49.605 clat (usec): min=147, max=365, avg=175.62, stdev=14.50 00:33:49.605 lat (usec): min=155, max=377, avg=188.41, stdev=16.53 00:33:49.605 clat percentiles (usec): 00:33:49.605 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:33:49.605 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:33:49.605 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 196], 00:33:49.605 | 99.00th=[ 210], 99.50th=[ 225], 99.90th=[ 314], 99.95th=[ 367], 00:33:49.605 | 99.99th=[ 367] 00:33:49.605 bw ( KiB/s): min= 2968, max= 9320, per=30.63%, avg=6144.00, stdev=4491.54, samples=2 00:33:49.605 iops : min= 742, max= 2330, avg=1536.00, stdev=1122.89, samples=2 00:33:49.605 lat (usec) : 250=76.15%, 500=23.48% 00:33:49.605 lat (msec) : 50=0.37% 00:33:49.605 cpu : usr=2.06%, sys=4.02%, ctx=2710, majf=0, minf=1 00:33:49.605 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.605 issued rwts: total=1173,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.605 00:33:49.605 Run status group 0 (all jobs): 00:33:49.605 READ: bw=13.3MiB/s (13.9MB/s), 90.1KiB/s-8276KiB/s (92.3kB/s-8474kB/s), io=13.6MiB (14.2MB), run=1001-1021msec 00:33:49.605 WRITE: bw=19.6MiB/s (20.5MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1021msec 00:33:49.605 00:33:49.605 Disk stats (read/write): 00:33:49.605 nvme0n1: ios=1879/2048, merge=0/0, ticks=443/339, in_queue=782, util=85.77% 00:33:49.605 nvme0n2: ios=67/512, merge=0/0, ticks=753/108, in_queue=861, util=90.74% 00:33:49.605 nvme0n3: ios=226/512, merge=0/0, ticks=1611/123, in_queue=1734, util=93.22% 00:33:49.605 nvme0n4: ios=1225/1536, merge=0/0, ticks=1384/252, in_queue=1636, util=93.91% 00:33:49.606 12:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:49.606 [global] 00:33:49.606 thread=1 00:33:49.606 invalidate=1 00:33:49.606 rw=randwrite 00:33:49.606 time_based=1 00:33:49.606 runtime=1 00:33:49.606 ioengine=libaio 00:33:49.606 direct=1 00:33:49.606 bs=4096 00:33:49.606 iodepth=1 00:33:49.606 norandommap=0 00:33:49.606 numjobs=1 00:33:49.606 00:33:49.606 verify_dump=1 00:33:49.606 verify_backlog=512 00:33:49.606 verify_state_save=0 00:33:49.606 do_verify=1 00:33:49.606 verify=crc32c-intel 00:33:49.606 [job0] 00:33:49.606 filename=/dev/nvme0n1 00:33:49.606 [job1] 00:33:49.606 filename=/dev/nvme0n2 00:33:49.606 [job2] 00:33:49.606 filename=/dev/nvme0n3 00:33:49.606 [job3] 00:33:49.606 filename=/dev/nvme0n4 00:33:49.606 
Could not set queue depth (nvme0n1) 00:33:49.606 Could not set queue depth (nvme0n2) 00:33:49.606 Could not set queue depth (nvme0n3) 00:33:49.606 Could not set queue depth (nvme0n4) 00:33:49.606 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:49.606 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:49.606 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:49.606 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:49.606 fio-3.35 00:33:49.606 Starting 4 threads 00:33:50.982 00:33:50.982 job0: (groupid=0, jobs=1): err= 0: pid=412400: Wed Nov 20 12:47:56 2024 00:33:50.982 read: IOPS=295, BW=1182KiB/s (1210kB/s)(1196KiB/1012msec) 00:33:50.982 slat (nsec): min=6909, max=45872, avg=9377.81, stdev=5446.04 00:33:50.982 clat (usec): min=193, max=42037, avg=3055.23, stdev=10333.77 00:33:50.982 lat (usec): min=201, max=42060, avg=3064.60, stdev=10336.08 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 00:33:50.982 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:33:50.982 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 302], 95.00th=[40633], 00:33:50.982 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:50.982 | 99.99th=[42206] 00:33:50.982 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:33:50.982 slat (nsec): min=10092, max=44848, avg=11502.34, stdev=2397.48 00:33:50.982 clat (usec): min=153, max=326, avg=170.28, stdev=10.98 00:33:50.982 lat (usec): min=164, max=365, avg=181.78, stdev=12.35 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:33:50.982 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 
00:33:50.982 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:33:50.982 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 326], 99.95th=[ 326], 00:33:50.982 | 99.99th=[ 326] 00:33:50.982 bw ( KiB/s): min= 4096, max= 4096, per=22.11%, avg=4096.00, stdev= 0.00, samples=1 00:33:50.982 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:50.982 lat (usec) : 250=94.08%, 500=3.33% 00:33:50.982 lat (msec) : 50=2.59% 00:33:50.982 cpu : usr=0.40%, sys=1.58%, ctx=811, majf=0, minf=1 00:33:50.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:50.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 issued rwts: total=299,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:50.982 job1: (groupid=0, jobs=1): err= 0: pid=412401: Wed Nov 20 12:47:56 2024 00:33:50.982 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:33:50.982 slat (nsec): min=4344, max=49993, avg=8377.63, stdev=1507.92 00:33:50.982 clat (usec): min=174, max=531, avg=204.14, stdev=24.15 00:33:50.982 lat (usec): min=183, max=556, avg=212.51, stdev=24.26 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:33:50.982 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:33:50.982 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 253], 00:33:50.982 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 502], 99.95th=[ 510], 00:33:50.982 | 99.99th=[ 529] 00:33:50.982 write: IOPS=2659, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:33:50.982 slat (nsec): min=8496, max=40988, avg=12341.91, stdev=1806.33 00:33:50.982 clat (usec): min=124, max=469, avg=152.87, stdev=28.61 00:33:50.982 lat (usec): min=138, max=483, avg=165.21, stdev=28.84 00:33:50.982 clat percentiles (usec): 
00:33:50.982 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 133], 00:33:50.982 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 149], 00:33:50.982 | 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 204], 00:33:50.982 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 371], 99.95th=[ 404], 00:33:50.982 | 99.99th=[ 469] 00:33:50.982 bw ( KiB/s): min=12288, max=12288, per=66.33%, avg=12288.00, stdev= 0.00, samples=1 00:33:50.982 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:33:50.982 lat (usec) : 250=96.51%, 500=3.43%, 750=0.06% 00:33:50.982 cpu : usr=4.40%, sys=8.40%, ctx=5224, majf=0, minf=1 00:33:50.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:50.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 issued rwts: total=2560,2662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:50.982 job2: (groupid=0, jobs=1): err= 0: pid=412402: Wed Nov 20 12:47:56 2024 00:33:50.982 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1002msec) 00:33:50.982 slat (nsec): min=10424, max=27015, avg=22000.96, stdev=5093.82 00:33:50.982 clat (usec): min=242, max=41891, avg=34781.17, stdev=14874.47 00:33:50.982 lat (usec): min=265, max=41912, avg=34803.17, stdev=14874.08 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 243], 5.00th=[ 326], 10.00th=[ 396], 20.00th=[40633], 00:33:50.982 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:50.982 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:50.982 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:50.982 | 99.99th=[41681] 00:33:50.982 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:33:50.982 slat (nsec): min=11022, max=52428, avg=12324.25, stdev=2523.66 00:33:50.982 
clat (usec): min=148, max=340, avg=174.10, stdev=14.12 00:33:50.982 lat (usec): min=163, max=366, avg=186.42, stdev=14.84 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:33:50.982 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:33:50.982 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:33:50.982 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 343], 99.95th=[ 343], 00:33:50.982 | 99.99th=[ 343] 00:33:50.982 bw ( KiB/s): min= 4096, max= 4096, per=22.11%, avg=4096.00, stdev= 0.00, samples=1 00:33:50.982 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:50.982 lat (usec) : 250=94.98%, 500=0.74% 00:33:50.982 lat (msec) : 2=0.19%, 50=4.09% 00:33:50.982 cpu : usr=0.40%, sys=1.00%, ctx=540, majf=0, minf=1 00:33:50.982 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:50.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.982 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.982 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:50.982 job3: (groupid=0, jobs=1): err= 0: pid=412403: Wed Nov 20 12:47:56 2024 00:33:50.982 read: IOPS=511, BW=2045KiB/s (2094kB/s)(2080KiB/1017msec) 00:33:50.982 slat (nsec): min=7456, max=26385, avg=9094.43, stdev=2814.72 00:33:50.982 clat (usec): min=174, max=41051, avg=1549.34, stdev=7250.18 00:33:50.982 lat (usec): min=181, max=41074, avg=1558.43, stdev=7252.51 00:33:50.982 clat percentiles (usec): 00:33:50.982 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 208], 00:33:50.982 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:33:50.982 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 269], 00:33:50.982 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:50.982 | 99.99th=[41157] 
00:33:50.983 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:33:50.983 slat (nsec): min=10720, max=44815, avg=12338.68, stdev=1925.81 00:33:50.983 clat (usec): min=138, max=396, avg=184.68, stdev=19.15 00:33:50.983 lat (usec): min=149, max=441, avg=197.01, stdev=19.65 00:33:50.983 clat percentiles (usec): 00:33:50.983 | 1.00th=[ 143], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 174], 00:33:50.983 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:33:50.983 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:33:50.983 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 338], 99.95th=[ 396], 00:33:50.983 | 99.99th=[ 396] 00:33:50.983 bw ( KiB/s): min= 4096, max= 4096, per=22.11%, avg=4096.00, stdev= 0.00, samples=2 00:33:50.983 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:33:50.983 lat (usec) : 250=96.37%, 500=2.53% 00:33:50.983 lat (msec) : 50=1.10% 00:33:50.983 cpu : usr=1.18%, sys=2.66%, ctx=1545, majf=0, minf=1 00:33:50.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:50.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.983 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:50.983 00:33:50.983 Run status group 0 (all jobs): 00:33:50.983 READ: bw=13.1MiB/s (13.7MB/s), 104KiB/s-9.99MiB/s (106kB/s-10.5MB/s), io=13.3MiB (13.9MB), run=1001-1017msec 00:33:50.983 WRITE: bw=18.1MiB/s (19.0MB/s), 2024KiB/s-10.4MiB/s (2072kB/s-10.9MB/s), io=18.4MiB (19.3MB), run=1001-1017msec 00:33:50.983 00:33:50.983 Disk stats (read/write): 00:33:50.983 nvme0n1: ios=345/512, merge=0/0, ticks=771/84, in_queue=855, util=87.47% 00:33:50.983 nvme0n2: ios=2087/2417, merge=0/0, ticks=1101/352, in_queue=1453, util=100.00% 00:33:50.983 nvme0n3: ios=44/512, merge=0/0, 
ticks=1727/88, in_queue=1815, util=98.44% 00:33:50.983 nvme0n4: ios=559/1024, merge=0/0, ticks=1803/178, in_queue=1981, util=96.44% 00:33:50.983 12:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:50.983 [global] 00:33:50.983 thread=1 00:33:50.983 invalidate=1 00:33:50.983 rw=write 00:33:50.983 time_based=1 00:33:50.983 runtime=1 00:33:50.983 ioengine=libaio 00:33:50.983 direct=1 00:33:50.983 bs=4096 00:33:50.983 iodepth=128 00:33:50.983 norandommap=0 00:33:50.983 numjobs=1 00:33:50.983 00:33:50.983 verify_dump=1 00:33:50.983 verify_backlog=512 00:33:50.983 verify_state_save=0 00:33:50.983 do_verify=1 00:33:50.983 verify=crc32c-intel 00:33:50.983 [job0] 00:33:50.983 filename=/dev/nvme0n1 00:33:50.983 [job1] 00:33:50.983 filename=/dev/nvme0n2 00:33:50.983 [job2] 00:33:50.983 filename=/dev/nvme0n3 00:33:50.983 [job3] 00:33:50.983 filename=/dev/nvme0n4 00:33:50.983 Could not set queue depth (nvme0n1) 00:33:50.983 Could not set queue depth (nvme0n2) 00:33:50.983 Could not set queue depth (nvme0n3) 00:33:50.983 Could not set queue depth (nvme0n4) 00:33:51.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.242 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.242 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.242 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.242 fio-3.35 00:33:51.242 Starting 4 threads 00:33:52.647 00:33:52.647 job0: (groupid=0, jobs=1): err= 0: pid=412773: Wed Nov 20 12:47:58 2024 00:33:52.647 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:33:52.647 slat (nsec): min=1656, max=14136k, avg=92492.22, stdev=784399.69 
00:33:52.647 clat (usec): min=1678, max=38619, avg=13182.15, stdev=5718.25 00:33:52.647 lat (usec): min=1701, max=38626, avg=13274.64, stdev=5774.82 00:33:52.647 clat percentiles (usec): 00:33:52.647 | 1.00th=[ 1876], 5.00th=[ 6718], 10.00th=[ 8225], 20.00th=[ 9372], 00:33:52.647 | 30.00th=[10028], 40.00th=[10290], 50.00th=[12387], 60.00th=[13435], 00:33:52.647 | 70.00th=[14353], 80.00th=[16712], 90.00th=[19792], 95.00th=[25035], 00:33:52.647 | 99.00th=[33162], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:33:52.647 | 99.99th=[38536] 00:33:52.647 write: IOPS=5298, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1007msec); 0 zone resets 00:33:52.647 slat (nsec): min=1849, max=15997k, avg=77179.01, stdev=698436.10 00:33:52.647 clat (usec): min=1058, max=61125, avg=11292.13, stdev=7077.68 00:33:52.647 lat (usec): min=1067, max=61127, avg=11369.31, stdev=7115.34 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 1270], 5.00th=[ 3589], 10.00th=[ 5669], 20.00th=[ 7242], 00:33:52.648 | 30.00th=[ 8160], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10683], 00:33:52.648 | 70.00th=[12387], 80.00th=[13829], 90.00th=[17695], 95.00th=[21365], 00:33:52.648 | 99.00th=[52167], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:33:52.648 | 99.99th=[61080] 00:33:52.648 bw ( KiB/s): min=16520, max=25144, per=28.71%, avg=20832.00, stdev=6098.09, samples=2 00:33:52.648 iops : min= 4130, max= 6286, avg=5208.00, stdev=1524.52, samples=2 00:33:52.648 lat (msec) : 2=1.58%, 4=2.56%, 10=36.60%, 20=51.70%, 50=7.04% 00:33:52.648 lat (msec) : 100=0.52% 00:33:52.648 cpu : usr=3.68%, sys=6.36%, ctx=281, majf=0, minf=1 00:33:52.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:52.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.648 issued rwts: total=5120,5336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.648 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:33:52.648 job1: (groupid=0, jobs=1): err= 0: pid=412774: Wed Nov 20 12:47:58 2024 00:33:52.648 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:33:52.648 slat (nsec): min=1053, max=12392k, avg=88234.83, stdev=542650.85 00:33:52.648 clat (usec): min=3666, max=27080, avg=10851.70, stdev=2861.18 00:33:52.648 lat (usec): min=3669, max=27089, avg=10939.93, stdev=2895.73 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 5997], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8848], 00:33:52.648 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:33:52.648 | 70.00th=[11600], 80.00th=[12649], 90.00th=[13698], 95.00th=[16319], 00:33:52.648 | 99.00th=[21103], 99.50th=[24773], 99.90th=[27132], 99.95th=[27132], 00:33:52.648 | 99.99th=[27132] 00:33:52.648 write: IOPS=5824, BW=22.8MiB/s (23.9MB/s)(22.8MiB/1003msec); 0 zone resets 00:33:52.648 slat (nsec): min=1806, max=17232k, avg=75799.82, stdev=489863.96 00:33:52.648 clat (usec): min=2382, max=39234, avg=11130.57, stdev=4527.26 00:33:52.648 lat (usec): min=3004, max=39241, avg=11206.37, stdev=4543.79 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 5145], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 8455], 00:33:52.648 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:33:52.648 | 70.00th=[10552], 80.00th=[11863], 90.00th=[15270], 95.00th=[22152], 00:33:52.648 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:33:52.648 | 99.99th=[39060] 00:33:52.648 bw ( KiB/s): min=22616, max=23104, per=31.50%, avg=22860.00, stdev=345.07, samples=2 00:33:52.648 iops : min= 5654, max= 5776, avg=5715.00, stdev=86.27, samples=2 00:33:52.648 lat (msec) : 4=0.19%, 10=40.11%, 20=55.86%, 50=3.84% 00:33:52.648 cpu : usr=3.59%, sys=5.79%, ctx=538, majf=0, minf=1 00:33:52.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:52.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:52.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.648 issued rwts: total=5632,5842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:52.648 job2: (groupid=0, jobs=1): err= 0: pid=412775: Wed Nov 20 12:47:58 2024 00:33:52.648 read: IOPS=2599, BW=10.2MiB/s (10.6MB/s)(10.2MiB/1001msec) 00:33:52.648 slat (nsec): min=1280, max=23781k, avg=183120.72, stdev=1355374.36 00:33:52.648 clat (usec): min=510, max=81482, avg=19765.76, stdev=16440.44 00:33:52.648 lat (usec): min=3962, max=81507, avg=19948.88, stdev=16594.83 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10683], 00:33:52.648 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[13173], 00:33:52.648 | 70.00th=[16712], 80.00th=[26608], 90.00th=[47449], 95.00th=[58459], 00:33:52.648 | 99.00th=[73925], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:33:52.648 | 99.99th=[81265] 00:33:52.648 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:33:52.648 slat (usec): min=2, max=14032, avg=163.35, stdev=900.49 00:33:52.648 clat (usec): min=1648, max=120569, avg=24595.87, stdev=22431.47 00:33:52.648 lat (usec): min=1663, max=120581, avg=24759.23, stdev=22576.04 00:33:52.648 clat percentiles (msec): 00:33:52.648 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 12], 00:33:52.648 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 16], 00:33:52.648 | 70.00th=[ 29], 80.00th=[ 42], 90.00th=[ 50], 95.00th=[ 77], 00:33:52.648 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 121], 00:33:52.648 | 99.99th=[ 121] 00:33:52.648 bw ( KiB/s): min=12144, max=12144, per=16.73%, avg=12144.00, stdev= 0.00, samples=1 00:33:52.648 iops : min= 3036, max= 3036, avg=3036.00, stdev= 0.00, samples=1 00:33:52.648 lat (usec) : 750=0.02% 00:33:52.648 lat (msec) : 2=0.04%, 4=0.37%, 10=10.52%, 20=60.22%, 50=19.46% 
00:33:52.648 lat (msec) : 100=8.28%, 250=1.09% 00:33:52.648 cpu : usr=2.40%, sys=3.90%, ctx=302, majf=0, minf=2 00:33:52.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:52.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.648 issued rwts: total=2602,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:52.648 job3: (groupid=0, jobs=1): err= 0: pid=412776: Wed Nov 20 12:47:58 2024 00:33:52.648 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:33:52.648 slat (nsec): min=1274, max=27194k, avg=114197.29, stdev=1067256.17 00:33:52.648 clat (usec): min=6924, max=38975, avg=14984.09, stdev=5901.18 00:33:52.648 lat (usec): min=6928, max=38986, avg=15098.28, stdev=5980.08 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 6915], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10290], 00:33:52.648 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12518], 60.00th=[14877], 00:33:52.648 | 70.00th=[17171], 80.00th=[19268], 90.00th=[23462], 95.00th=[28705], 00:33:52.648 | 99.00th=[32900], 99.50th=[32900], 99.90th=[39060], 99.95th=[39060], 00:33:52.648 | 99.99th=[39060] 00:33:52.648 write: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1007msec); 0 zone resets 00:33:52.648 slat (usec): min=2, max=19799, avg=140.32, stdev=949.55 00:33:52.648 clat (usec): min=1112, max=82170, avg=18377.58, stdev=13926.29 00:33:52.648 lat (usec): min=1121, max=82183, avg=18517.90, stdev=14014.93 00:33:52.648 clat percentiles (usec): 00:33:52.648 | 1.00th=[ 5342], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[ 9634], 00:33:52.648 | 30.00th=[10945], 40.00th=[11600], 50.00th=[13960], 60.00th=[16319], 00:33:52.648 | 70.00th=[19792], 80.00th=[22152], 90.00th=[41681], 95.00th=[51119], 00:33:52.648 | 99.00th=[76022], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:33:52.648 | 
99.99th=[82314] 00:33:52.648 bw ( KiB/s): min=13744, max=17384, per=21.45%, avg=15564.00, stdev=2573.87, samples=2 00:33:52.648 iops : min= 3436, max= 4346, avg=3891.00, stdev=643.47, samples=2 00:33:52.648 lat (msec) : 2=0.07%, 10=16.05%, 20=60.70%, 50=20.29%, 100=2.89% 00:33:52.648 cpu : usr=3.38%, sys=4.47%, ctx=301, majf=0, minf=1 00:33:52.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:52.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:52.648 issued rwts: total=3584,4019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:52.648 00:33:52.648 Run status group 0 (all jobs): 00:33:52.648 READ: bw=65.7MiB/s (68.9MB/s), 10.2MiB/s-21.9MiB/s (10.6MB/s-23.0MB/s), io=66.2MiB (69.4MB), run=1001-1007msec 00:33:52.648 WRITE: bw=70.9MiB/s (74.3MB/s), 12.0MiB/s-22.8MiB/s (12.6MB/s-23.9MB/s), io=71.4MiB (74.8MB), run=1001-1007msec 00:33:52.648 00:33:52.648 Disk stats (read/write): 00:33:52.648 nvme0n1: ios=4660/4860, merge=0/0, ticks=50247/46384, in_queue=96631, util=98.30% 00:33:52.648 nvme0n2: ios=4632/5023, merge=0/0, ticks=23106/27378, in_queue=50484, util=86.92% 00:33:52.648 nvme0n3: ios=2048/2057, merge=0/0, ticks=28792/52588, in_queue=81380, util=88.98% 00:33:52.648 nvme0n4: ios=3099/3527, merge=0/0, ticks=41727/59923, in_queue=101650, util=98.53% 00:33:52.648 12:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:52.648 [global] 00:33:52.648 thread=1 00:33:52.649 invalidate=1 00:33:52.649 rw=randwrite 00:33:52.649 time_based=1 00:33:52.649 runtime=1 00:33:52.649 ioengine=libaio 00:33:52.649 direct=1 00:33:52.649 bs=4096 00:33:52.649 iodepth=128 00:33:52.649 norandommap=0 00:33:52.649 numjobs=1 
00:33:52.649 00:33:52.649 verify_dump=1 00:33:52.649 verify_backlog=512 00:33:52.649 verify_state_save=0 00:33:52.649 do_verify=1 00:33:52.649 verify=crc32c-intel 00:33:52.649 [job0] 00:33:52.649 filename=/dev/nvme0n1 00:33:52.649 [job1] 00:33:52.649 filename=/dev/nvme0n2 00:33:52.649 [job2] 00:33:52.649 filename=/dev/nvme0n3 00:33:52.649 [job3] 00:33:52.649 filename=/dev/nvme0n4 00:33:52.649 Could not set queue depth (nvme0n1) 00:33:52.649 Could not set queue depth (nvme0n2) 00:33:52.649 Could not set queue depth (nvme0n3) 00:33:52.649 Could not set queue depth (nvme0n4) 00:33:52.910 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:52.910 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:52.910 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:52.910 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:52.910 fio-3.35 00:33:52.910 Starting 4 threads 00:33:54.281 00:33:54.281 job0: (groupid=0, jobs=1): err= 0: pid=413144: Wed Nov 20 12:47:59 2024 00:33:54.281 read: IOPS=5036, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1002msec) 00:33:54.281 slat (nsec): min=1186, max=7609.5k, avg=94879.52, stdev=519839.40 00:33:54.281 clat (usec): min=912, max=33069, avg=11478.34, stdev=3758.82 00:33:54.281 lat (usec): min=6264, max=33076, avg=11573.22, stdev=3802.65 00:33:54.281 clat percentiles (usec): 00:33:54.281 | 1.00th=[ 6587], 5.00th=[ 7767], 10.00th=[ 8160], 20.00th=[ 9241], 00:33:54.281 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:33:54.281 | 70.00th=[11338], 80.00th=[12256], 90.00th=[17171], 95.00th=[20579], 00:33:54.281 | 99.00th=[25560], 99.50th=[25822], 99.90th=[28967], 99.95th=[29492], 00:33:54.281 | 99.99th=[33162] 00:33:54.281 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 
zone resets 00:33:54.281 slat (nsec): min=1900, max=11442k, avg=97682.07, stdev=516157.35 00:33:54.281 clat (usec): min=5821, max=61953, avg=13443.91, stdev=8451.29 00:33:54.281 lat (usec): min=5823, max=61960, avg=13541.60, stdev=8498.88 00:33:54.281 clat percentiles (usec): 00:33:54.281 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9896], 00:33:54.281 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10683], 60.00th=[11338], 00:33:54.281 | 70.00th=[11994], 80.00th=[13960], 90.00th=[17433], 95.00th=[34341], 00:33:54.281 | 99.00th=[61080], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:33:54.281 | 99.99th=[62129] 00:33:54.281 bw ( KiB/s): min=19184, max=21776, per=29.22%, avg=20480.00, stdev=1832.82, samples=2 00:33:54.281 iops : min= 4796, max= 5444, avg=5120.00, stdev=458.21, samples=2 00:33:54.281 lat (usec) : 1000=0.01% 00:33:54.281 lat (msec) : 10=29.00%, 20=63.52%, 50=6.78%, 100=0.70% 00:33:54.281 cpu : usr=3.50%, sys=3.70%, ctx=569, majf=0, minf=1 00:33:54.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:54.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.281 issued rwts: total=5047,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.281 job1: (groupid=0, jobs=1): err= 0: pid=413145: Wed Nov 20 12:47:59 2024 00:33:54.281 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:33:54.281 slat (nsec): min=1072, max=21245k, avg=118819.53, stdev=840465.88 00:33:54.282 clat (usec): min=4052, max=47627, avg=15245.98, stdev=7569.17 00:33:54.282 lat (usec): min=4058, max=47651, avg=15364.80, stdev=7647.18 00:33:54.282 clat percentiles (usec): 00:33:54.282 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:33:54.282 | 30.00th=[10290], 40.00th=[10945], 50.00th=[12387], 60.00th=[13829], 00:33:54.282 | 
70.00th=[15664], 80.00th=[22152], 90.00th=[27919], 95.00th=[30540], 00:33:54.282 | 99.00th=[37487], 99.50th=[39584], 99.90th=[39584], 99.95th=[40633], 00:33:54.282 | 99.99th=[47449] 00:33:54.282 write: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1002msec); 0 zone resets 00:33:54.282 slat (nsec): min=1732, max=14066k, avg=127916.89, stdev=806010.56 00:33:54.282 clat (usec): min=349, max=102084, avg=17980.31, stdev=16345.57 00:33:54.282 lat (usec): min=644, max=102095, avg=18108.22, stdev=16451.38 00:33:54.282 clat percentiles (msec): 00:33:54.282 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:33:54.282 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:33:54.282 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 47], 00:33:54.282 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:33:54.282 | 99.99th=[ 103] 00:33:54.282 bw ( KiB/s): min=11568, max=19552, per=22.20%, avg=15560.00, stdev=5645.54, samples=2 00:33:54.282 iops : min= 2892, max= 4888, avg=3890.00, stdev=1411.39, samples=2 00:33:54.282 lat (usec) : 500=0.01%, 750=0.04% 00:33:54.282 lat (msec) : 2=0.21%, 4=1.12%, 10=23.24%, 20=47.80%, 50=25.27% 00:33:54.282 lat (msec) : 100=2.21%, 250=0.09% 00:33:54.282 cpu : usr=2.80%, sys=4.10%, ctx=362, majf=0, minf=2 00:33:54.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:54.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.282 issued rwts: total=3584,4018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.282 job2: (groupid=0, jobs=1): err= 0: pid=413146: Wed Nov 20 12:47:59 2024 00:33:54.282 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:33:54.282 slat (nsec): min=1475, max=45695k, avg=142306.03, stdev=1170730.74 00:33:54.282 clat (usec): min=4186, max=61536, avg=17427.40, stdev=8749.64 
00:33:54.282 lat (usec): min=4193, max=61560, avg=17569.71, stdev=8805.51 00:33:54.282 clat percentiles (usec): 00:33:54.282 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[11207], 00:33:54.282 | 30.00th=[13173], 40.00th=[14746], 50.00th=[15795], 60.00th=[17171], 00:33:54.282 | 70.00th=[19268], 80.00th=[21365], 90.00th=[25035], 95.00th=[29754], 00:33:54.282 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56361], 99.95th=[57934], 00:33:54.282 | 99.99th=[61604] 00:33:54.282 write: IOPS=3838, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec); 0 zone resets 00:33:54.282 slat (nsec): min=1861, max=11765k, avg=121891.24, stdev=827369.91 00:33:54.282 clat (usec): min=798, max=43478, avg=16662.69, stdev=6476.29 00:33:54.282 lat (usec): min=4659, max=43500, avg=16784.58, stdev=6533.97 00:33:54.282 clat percentiles (usec): 00:33:54.282 | 1.00th=[ 6128], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[11207], 00:33:54.282 | 30.00th=[13435], 40.00th=[14091], 50.00th=[15664], 60.00th=[17171], 00:33:54.282 | 70.00th=[18220], 80.00th=[21365], 90.00th=[24773], 95.00th=[30016], 00:33:54.282 | 99.00th=[37487], 99.50th=[41681], 99.90th=[41681], 99.95th=[42730], 00:33:54.282 | 99.99th=[43254] 00:33:54.282 bw ( KiB/s): min=13952, max=15800, per=21.22%, avg=14876.00, stdev=1306.73, samples=2 00:33:54.282 iops : min= 3488, max= 3950, avg=3719.00, stdev=326.68, samples=2 00:33:54.282 lat (usec) : 1000=0.01% 00:33:54.282 lat (msec) : 10=9.04%, 20=66.27%, 50=23.53%, 100=1.14% 00:33:54.282 cpu : usr=2.50%, sys=4.10%, ctx=281, majf=0, minf=1 00:33:54.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:54.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.282 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.282 job3: (groupid=0, jobs=1): err= 0: 
pid=413147: Wed Nov 20 12:47:59 2024 00:33:54.282 read: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1004msec) 00:33:54.282 slat (nsec): min=1140, max=21722k, avg=107369.73, stdev=837116.02 00:33:54.282 clat (usec): min=1060, max=50182, avg=14061.54, stdev=6730.56 00:33:54.282 lat (usec): min=5911, max=53380, avg=14168.91, stdev=6777.94 00:33:54.282 clat percentiles (usec): 00:33:54.282 | 1.00th=[ 7504], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[ 9896], 00:33:54.282 | 30.00th=[10290], 40.00th=[11469], 50.00th=[11994], 60.00th=[12649], 00:33:54.282 | 70.00th=[13960], 80.00th=[16188], 90.00th=[20317], 95.00th=[25822], 00:33:54.282 | 99.00th=[39584], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:33:54.282 | 99.99th=[50070] 00:33:54.282 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:33:54.282 slat (nsec): min=1901, max=13254k, avg=107620.45, stdev=689878.64 00:33:54.282 clat (usec): min=2422, max=75107, avg=14234.78, stdev=8049.74 00:33:54.282 lat (usec): min=2428, max=75116, avg=14342.40, stdev=8096.09 00:33:54.282 clat percentiles (usec): 00:33:54.282 | 1.00th=[ 5866], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 9110], 00:33:54.282 | 30.00th=[10028], 40.00th=[11207], 50.00th=[11994], 60.00th=[12911], 00:33:54.282 | 70.00th=[16319], 80.00th=[20317], 90.00th=[22414], 95.00th=[23987], 00:33:54.282 | 99.00th=[59507], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:33:54.282 | 99.99th=[74974] 00:33:54.282 bw ( KiB/s): min=16384, max=20480, per=26.30%, avg=18432.00, stdev=2896.31, samples=2 00:33:54.282 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:33:54.282 lat (msec) : 2=0.01%, 4=0.29%, 10=26.44%, 20=56.84%, 50=15.56% 00:33:54.282 lat (msec) : 100=0.86% 00:33:54.282 cpu : usr=3.59%, sys=4.89%, ctx=373, majf=0, minf=1 00:33:54.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:54.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.282 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:54.282 issued rwts: total=4345,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:54.282 00:33:54.282 Run status group 0 (all jobs): 00:33:54.282 READ: bw=64.4MiB/s (67.6MB/s), 14.0MiB/s-19.7MiB/s (14.7MB/s-20.6MB/s), io=64.7MiB (67.8MB), run=1002-1004msec 00:33:54.282 WRITE: bw=68.4MiB/s (71.8MB/s), 15.0MiB/s-20.0MiB/s (15.7MB/s-20.9MB/s), io=68.7MiB (72.1MB), run=1002-1004msec 00:33:54.282 00:33:54.282 Disk stats (read/write): 00:33:54.282 nvme0n1: ios=4145/4415, merge=0/0, ticks=14826/18199, in_queue=33025, util=82.15% 00:33:54.282 nvme0n2: ios=3072/3435, merge=0/0, ticks=25229/26800, in_queue=52029, util=82.01% 00:33:54.282 nvme0n3: ios=3092/3191, merge=0/0, ticks=27293/25097, in_queue=52390, util=97.18% 00:33:54.282 nvme0n4: ios=3287/3584, merge=0/0, ticks=27194/31857, in_queue=59051, util=98.68% 00:33:54.282 12:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:54.282 12:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=413377 00:33:54.282 12:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:54.282 12:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:54.282 [global] 00:33:54.282 thread=1 00:33:54.282 invalidate=1 00:33:54.282 rw=read 00:33:54.282 time_based=1 00:33:54.282 runtime=10 00:33:54.282 ioengine=libaio 00:33:54.282 direct=1 00:33:54.282 bs=4096 00:33:54.282 iodepth=1 00:33:54.282 norandommap=1 00:33:54.282 numjobs=1 00:33:54.282 00:33:54.282 [job0] 00:33:54.282 filename=/dev/nvme0n1 00:33:54.282 [job1] 00:33:54.282 filename=/dev/nvme0n2 00:33:54.282 [job2] 00:33:54.282 filename=/dev/nvme0n3 00:33:54.282 [job3] 00:33:54.282 
filename=/dev/nvme0n4 00:33:54.282 Could not set queue depth (nvme0n1) 00:33:54.282 Could not set queue depth (nvme0n2) 00:33:54.282 Could not set queue depth (nvme0n3) 00:33:54.282 Could not set queue depth (nvme0n4) 00:33:54.540 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.540 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.540 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.540 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.540 fio-3.35 00:33:54.540 Starting 4 threads 00:33:57.064 12:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:57.321 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=20762624, buflen=4096 00:33:57.321 fio: pid=413540, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:57.321 12:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:57.578 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46125056, buflen=4096 00:33:57.578 fio: pid=413539, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:57.578 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.578 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:57.578 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.579 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:57.579 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1359872, buflen=4096 00:33:57.579 fio: pid=413530, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:57.836 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.836 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:57.836 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=446464, buflen=4096 00:33:57.836 fio: pid=413538, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:58.095 00:33:58.095 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=413530: Wed Nov 20 12:48:03 2024 00:33:58.095 read: IOPS=106, BW=424KiB/s (435kB/s)(1328KiB/3129msec) 00:33:58.095 slat (usec): min=7, max=13800, avg=53.62, stdev=755.61 00:33:58.095 clat (usec): min=212, max=42044, avg=9300.56, stdev=16830.16 00:33:58.095 lat (usec): min=222, max=55077, avg=9354.27, stdev=16928.11 00:33:58.095 clat percentiles (usec): 00:33:58.095 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 343], 00:33:58.095 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 449], 00:33:58.095 | 70.00th=[ 494], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:33:58.095 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:58.095 | 99.99th=[42206] 00:33:58.095 bw ( KiB/s): min= 112, max= 1592, per=2.17%, avg=433.50, 
stdev=569.44, samples=6 00:33:58.095 iops : min= 28, max= 398, avg=108.33, stdev=142.38, samples=6 00:33:58.095 lat (usec) : 250=9.01%, 500=62.76%, 750=5.71% 00:33:58.095 lat (msec) : 2=0.30%, 50=21.92% 00:33:58.095 cpu : usr=0.00%, sys=0.32%, ctx=334, majf=0, minf=2 00:33:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 issued rwts: total=333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:58.095 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=413538: Wed Nov 20 12:48:03 2024 00:33:58.095 read: IOPS=32, BW=130KiB/s (133kB/s)(436KiB/3363msec) 00:33:58.095 slat (usec): min=8, max=6742, avg=130.82, stdev=838.48 00:33:58.095 clat (usec): min=218, max=44985, avg=30520.35, stdev=17889.67 00:33:58.095 lat (usec): min=230, max=48016, avg=30652.16, stdev=17976.79 00:33:58.095 clat percentiles (usec): 00:33:58.095 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:33:58.095 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:33:58.095 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:58.095 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:33:58.095 | 99.99th=[44827] 00:33:58.095 bw ( KiB/s): min= 96, max= 160, per=0.65%, avg=130.67, stdev=29.79, samples=6 00:33:58.095 iops : min= 24, max= 40, avg=32.67, stdev= 7.45, samples=6 00:33:58.095 lat (usec) : 250=21.82%, 500=3.64% 00:33:58.095 lat (msec) : 50=73.64% 00:33:58.095 cpu : usr=0.12%, sys=0.00%, ctx=113, majf=0, minf=1 00:33:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 
complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:58.095 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=413539: Wed Nov 20 12:48:03 2024 00:33:58.095 read: IOPS=3871, BW=15.1MiB/s (15.9MB/s)(44.0MiB/2909msec) 00:33:58.095 slat (usec): min=6, max=11068, avg= 9.80, stdev=126.38 00:33:58.095 clat (usec): min=179, max=1894, avg=244.96, stdev=40.22 00:33:58.095 lat (usec): min=193, max=11471, avg=254.76, stdev=134.57 00:33:58.095 clat percentiles (usec): 00:33:58.095 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 239], 00:33:58.095 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:33:58.095 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 262], 00:33:58.095 | 99.00th=[ 326], 99.50th=[ 424], 99.90th=[ 506], 99.95th=[ 1500], 00:33:58.095 | 99.99th=[ 1647] 00:33:58.095 bw ( KiB/s): min=15176, max=16912, per=78.81%, avg=15721.60, stdev=681.02, samples=5 00:33:58.095 iops : min= 3794, max= 4228, avg=3930.40, stdev=170.26, samples=5 00:33:58.095 lat (usec) : 250=68.44%, 500=31.42%, 750=0.07% 00:33:58.095 lat (msec) : 2=0.05% 00:33:58.095 cpu : usr=1.58%, sys=5.12%, ctx=11264, majf=0, minf=2 00:33:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 issued rwts: total=11262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:58.095 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=413540: Wed Nov 20 12:48:03 2024 00:33:58.095 read: IOPS=1868, BW=7474KiB/s (7653kB/s)(19.8MiB/2713msec) 
00:33:58.095 slat (nsec): min=7260, max=40492, avg=9753.93, stdev=2074.97 00:33:58.095 clat (usec): min=192, max=41310, avg=518.26, stdev=3327.59 00:33:58.095 lat (usec): min=200, max=41319, avg=528.01, stdev=3328.56 00:33:58.095 clat percentiles (usec): 00:33:58.095 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:33:58.095 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:33:58.095 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:33:58.095 | 99.00th=[ 396], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:58.095 | 99.99th=[41157] 00:33:58.095 bw ( KiB/s): min= 96, max=15784, per=34.87%, avg=6955.20, stdev=7761.20, samples=5 00:33:58.095 iops : min= 24, max= 3946, avg=1738.80, stdev=1940.30, samples=5 00:33:58.095 lat (usec) : 250=77.14%, 500=22.07%, 750=0.04% 00:33:58.095 lat (msec) : 2=0.04%, 10=0.02%, 50=0.67% 00:33:58.095 cpu : usr=1.11%, sys=3.24%, ctx=5070, majf=0, minf=2 00:33:58.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.095 issued rwts: total=5070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:58.095 00:33:58.095 Run status group 0 (all jobs): 00:33:58.095 READ: bw=19.5MiB/s (20.4MB/s), 130KiB/s-15.1MiB/s (133kB/s-15.9MB/s), io=65.5MiB (68.7MB), run=2713-3363msec 00:33:58.095 00:33:58.095 Disk stats (read/write): 00:33:58.095 nvme0n1: ios=331/0, merge=0/0, ticks=3047/0, in_queue=3047, util=95.31% 00:33:58.095 nvme0n2: ios=110/0, merge=0/0, ticks=3338/0, in_queue=3338, util=95.98% 00:33:58.095 nvme0n3: ios=11121/0, merge=0/0, ticks=2626/0, in_queue=2626, util=95.94% 00:33:58.095 nvme0n4: ios=4742/0, merge=0/0, ticks=2507/0, in_queue=2507, util=96.48% 00:33:58.095 12:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:58.095 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:58.352 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:58.352 12:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:58.609 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:58.609 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 413377 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:58.868 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:59.126 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:59.126 nvmf hotplug test: fio failed as expected 00:33:59.126 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.384 rmmod nvme_tcp 00:33:59.384 rmmod nvme_fabrics 00:33:59.384 rmmod nvme_keyring 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 410817 ']' 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 410817 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 410817 ']' 00:33:59.384 12:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 410817 00:33:59.384 12:48:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 410817 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 410817' 00:33:59.384 killing process with pid 410817 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 410817 00:33:59.384 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 410817 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.643 12:48:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.643 12:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.549 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.549 00:34:01.549 real 0m25.762s 00:34:01.549 user 1m30.608s 00:34:01.549 sys 0m11.215s 00:34:01.549 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.549 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.549 ************************************ 00:34:01.549 END TEST nvmf_fio_target 00:34:01.549 ************************************ 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:01.810 ************************************ 00:34:01.810 START TEST nvmf_bdevio 00:34:01.810 
************************************ 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:01.810 * Looking for test storage... 00:34:01.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.810 --rc genhtml_branch_coverage=1 00:34:01.810 --rc genhtml_function_coverage=1 00:34:01.810 --rc genhtml_legend=1 00:34:01.810 --rc geninfo_all_blocks=1 00:34:01.810 --rc geninfo_unexecuted_blocks=1 00:34:01.810 00:34:01.810 ' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.810 --rc genhtml_branch_coverage=1 00:34:01.810 --rc genhtml_function_coverage=1 00:34:01.810 --rc genhtml_legend=1 00:34:01.810 --rc geninfo_all_blocks=1 00:34:01.810 --rc geninfo_unexecuted_blocks=1 00:34:01.810 00:34:01.810 ' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.810 --rc genhtml_branch_coverage=1 00:34:01.810 --rc genhtml_function_coverage=1 00:34:01.810 --rc genhtml_legend=1 00:34:01.810 --rc geninfo_all_blocks=1 00:34:01.810 --rc geninfo_unexecuted_blocks=1 00:34:01.810 00:34:01.810 ' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:01.810 --rc genhtml_branch_coverage=1 00:34:01.810 --rc genhtml_function_coverage=1 00:34:01.810 --rc genhtml_legend=1 00:34:01.810 --rc geninfo_all_blocks=1 00:34:01.810 --rc geninfo_unexecuted_blocks=1 00:34:01.810 00:34:01.810 ' 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.810 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:01.811 12:48:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.811 12:48:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:01.811 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:02.070 12:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.347 12:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.347 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.348 12:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:07.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:07.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:07.348 Found net devices under 0000:86:00.0: cvl_0_0 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:07.348 Found net devices under 0000:86:00.1: cvl_0_1 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.348 
12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.348 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:34:07.608 00:34:07.608 --- 10.0.0.2 ping statistics --- 00:34:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.608 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:34:07.608 00:34:07.608 --- 10.0.0.1 ping statistics --- 00:34:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.608 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.608 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.867 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:07.867 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.867 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.867 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=418275 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 418275 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 418275 ']' 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:07.868 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.868 [2024-11-20 12:48:13.443334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.868 [2024-11-20 12:48:13.444243] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:34:07.868 [2024-11-20 12:48:13.444280] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.868 [2024-11-20 12:48:13.520412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.868 [2024-11-20 12:48:13.561787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.868 [2024-11-20 12:48:13.561821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.868 [2024-11-20 12:48:13.561828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.868 [2024-11-20 12:48:13.561834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.868 [2024-11-20 12:48:13.561838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.868 [2024-11-20 12:48:13.563293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:07.868 [2024-11-20 12:48:13.563400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:07.868 [2024-11-20 12:48:13.563504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:07.868 [2024-11-20 12:48:13.563505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:07.868 [2024-11-20 12:48:13.628665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.868 [2024-11-20 12:48:13.629178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:07.868 [2024-11-20 12:48:13.629555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:07.868 [2024-11-20 12:48:13.629942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:07.868 [2024-11-20 12:48:13.629975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 [2024-11-20 12:48:13.696301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 Malloc0 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:08.127 [2024-11-20 12:48:13.780462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.127 { 00:34:08.127 "params": { 00:34:08.127 "name": "Nvme$subsystem", 00:34:08.127 "trtype": "$TEST_TRANSPORT", 00:34:08.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.127 "adrfam": "ipv4", 00:34:08.127 "trsvcid": "$NVMF_PORT", 00:34:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.127 "hdgst": ${hdgst:-false}, 00:34:08.127 "ddgst": ${ddgst:-false} 00:34:08.127 }, 00:34:08.127 "method": "bdev_nvme_attach_controller" 00:34:08.127 } 00:34:08.127 EOF 00:34:08.127 )") 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:08.127 12:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.127 "params": { 00:34:08.127 "name": "Nvme1", 00:34:08.127 "trtype": "tcp", 00:34:08.127 "traddr": "10.0.0.2", 00:34:08.127 "adrfam": "ipv4", 00:34:08.127 "trsvcid": "4420", 00:34:08.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.127 "hdgst": false, 00:34:08.127 "ddgst": false 00:34:08.127 }, 00:34:08.127 "method": "bdev_nvme_attach_controller" 00:34:08.127 }' 00:34:08.127 [2024-11-20 12:48:13.829745] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:34:08.127 [2024-11-20 12:48:13.829790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418320 ] 00:34:08.385 [2024-11-20 12:48:13.904518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:08.385 [2024-11-20 12:48:13.948045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.385 [2024-11-20 12:48:13.948155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.385 [2024-11-20 12:48:13.948155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.385 I/O targets: 00:34:08.385 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:08.385 00:34:08.385 00:34:08.385 CUnit - A unit testing framework for C - Version 2.1-3 00:34:08.385 http://cunit.sourceforge.net/ 00:34:08.385 00:34:08.385 00:34:08.385 Suite: bdevio tests on: Nvme1n1 00:34:08.385 Test: blockdev write read block ...passed 00:34:08.642 Test: blockdev write zeroes read block ...passed 00:34:08.642 Test: blockdev write zeroes read no split ...passed 00:34:08.642 Test: blockdev 
write zeroes read split ...passed 00:34:08.642 Test: blockdev write zeroes read split partial ...passed 00:34:08.642 Test: blockdev reset ...[2024-11-20 12:48:14.204693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:08.642 [2024-11-20 12:48:14.204758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc1340 (9): Bad file descriptor 00:34:08.642 [2024-11-20 12:48:14.256299] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:08.642 passed 00:34:08.642 Test: blockdev write read 8 blocks ...passed 00:34:08.642 Test: blockdev write read size > 128k ...passed 00:34:08.642 Test: blockdev write read invalid size ...passed 00:34:08.642 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:08.642 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:08.642 Test: blockdev write read max offset ...passed 00:34:08.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:08.899 Test: blockdev writev readv 8 blocks ...passed 00:34:08.899 Test: blockdev writev readv 30 x 1block ...passed 00:34:08.899 Test: blockdev writev readv block ...passed 00:34:08.899 Test: blockdev writev readv size > 128k ...passed 00:34:08.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:08.899 Test: blockdev comparev and writev ...[2024-11-20 12:48:14.466153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.466181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.466195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 
[2024-11-20 12:48:14.466208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.466494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.466504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.466516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.466523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.466800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.466809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.466821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.466828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.467098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.467109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.467120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:08.900 [2024-11-20 12:48:14.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:08.900 passed 00:34:08.900 Test: blockdev nvme passthru rw ...passed 00:34:08.900 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:48:14.549664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:08.900 [2024-11-20 12:48:14.549679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.549788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:08.900 [2024-11-20 12:48:14.549797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.549904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:08.900 [2024-11-20 12:48:14.549913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:08.900 [2024-11-20 12:48:14.550016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:08.900 [2024-11-20 12:48:14.550025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:08.900 passed 00:34:08.900 Test: blockdev nvme admin passthru ...passed 00:34:08.900 Test: blockdev copy ...passed 00:34:08.900 00:34:08.900 Run Summary: Type Total Ran Passed Failed Inactive 00:34:08.900 suites 1 1 n/a 0 0 00:34:08.900 tests 23 23 23 0 0 00:34:08.900 asserts 152 152 152 0 n/a 00:34:08.900 00:34:08.900 Elapsed time = 1.028 
seconds 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.159 rmmod nvme_tcp 00:34:09.159 rmmod nvme_fabrics 00:34:09.159 rmmod nvme_keyring 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 418275 ']' 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 418275 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 418275 ']' 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 418275 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418275 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418275' 00:34:09.159 killing process with pid 418275 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 418275 00:34:09.159 12:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 418275 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.418 12:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.952 00:34:11.952 real 0m9.771s 00:34:11.952 user 0m8.036s 00:34:11.952 sys 0m5.172s 00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:11.952 ************************************ 00:34:11.952 END TEST nvmf_bdevio 00:34:11.952 ************************************ 00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:11.952 00:34:11.952 real 4m32.477s 00:34:11.952 user 9m4.338s 00:34:11.952 sys 1m51.116s 00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:34:11.952 12:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:11.952 ************************************
00:34:11.952 END TEST nvmf_target_core_interrupt_mode
00:34:11.952 ************************************
00:34:11.952 12:48:17 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:11.952 12:48:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:11.952 12:48:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:11.952 12:48:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:11.952 ************************************
00:34:11.952 START TEST nvmf_interrupt
00:34:11.952 ************************************
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:11.952 * Looking for test storage...
00:34:11.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:34:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:11.952 --rc genhtml_branch_coverage=1
00:34:11.952 --rc genhtml_function_coverage=1
00:34:11.952 --rc genhtml_legend=1
00:34:11.952 --rc geninfo_all_blocks=1
00:34:11.952 --rc geninfo_unexecuted_blocks=1
00:34:11.952
00:34:11.952 '
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:34:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:11.952 --rc genhtml_branch_coverage=1
00:34:11.952 --rc genhtml_function_coverage=1
00:34:11.952 --rc genhtml_legend=1
00:34:11.952 --rc geninfo_all_blocks=1
00:34:11.952 --rc geninfo_unexecuted_blocks=1
00:34:11.952
00:34:11.952 '
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:34:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:11.952 --rc genhtml_branch_coverage=1
00:34:11.952 --rc genhtml_function_coverage=1
00:34:11.952 --rc genhtml_legend=1
00:34:11.952 --rc geninfo_all_blocks=1
00:34:11.952 --rc geninfo_unexecuted_blocks=1
00:34:11.952
00:34:11.952 '
00:34:11.952 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:34:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:11.952 --rc genhtml_branch_coverage=1
00:34:11.952 --rc genhtml_function_coverage=1
00:34:11.952 --rc genhtml_legend=1
00:34:11.952 --rc geninfo_all_blocks=1
00:34:11.952 --rc geninfo_unexecuted_blocks=1
00:34:11.952
00:34:11.952 '
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable
00:34:11.953 12:48:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=()
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:34:18.522 Found 0000:86:00.0 (0x8086 - 0x159b)
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:34:18.522 Found 0000:86:00.1 (0x8086 - 0x159b)
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:18.522 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:34:18.523 Found net devices under 0000:86:00.0: cvl_0_0
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:34:18.523 Found net devices under 0000:86:00.1: cvl_0_1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:18.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:18.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms
00:34:18.523
00:34:18.523 --- 10.0.0.2 ping statistics ---
00:34:18.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:18.523 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:18.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:18.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:34:18.523
00:34:18.523 --- 10.0.0.1 ping statistics ---
00:34:18.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:18.523 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=422056
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 422056
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 422056 ']'
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:18.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.523 [2024-11-20 12:48:23.447961] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:18.523 [2024-11-20 12:48:23.448859] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization...
00:34:18.523 [2024-11-20 12:48:23.448892] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:18.523 [2024-11-20 12:48:23.525502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:18.523 [2024-11-20 12:48:23.566899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:18.523 [2024-11-20 12:48:23.566932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:18.523 [2024-11-20 12:48:23.566939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:18.523 [2024-11-20 12:48:23.566945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:18.523 [2024-11-20 12:48:23.566950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:18.523 [2024-11-20 12:48:23.568130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:18.523 [2024-11-20 12:48:23.568132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:18.523 [2024-11-20 12:48:23.635240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:18.523 [2024-11-20 12:48:23.635619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:18.523 [2024-11-20 12:48:23.635909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:18.523 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:34:18.524 5000+0 records in
00:34:18.524 5000+0 records out
00:34:18.524 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0174543 s, 587 MB/s
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 AIO0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 [2024-11-20 12:48:23.764934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:18.524 [2024-11-20 12:48:23.805267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 422056 0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 0 idle
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422056 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0'
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422056 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 422056 1
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 1 idle
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:18.524 12:48:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422060 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1'
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422060 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=422182
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 422056 0
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 422056 0 busy
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:34:18.524 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:34:18.525 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:34:18.525 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256
00:34:18.525 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422056 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0'
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422056 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 --
# BUSY_THRESHOLD=30 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 422056 1 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 422056 1 busy 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:18.782 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:18.783 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:18.783 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256 00:34:18.783 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422060 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.28 reactor_1' 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422060 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.28 reactor_1 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:19.040 12:48:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 422182 00:34:29.000 Initializing NVMe Controllers 00:34:29.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.000 Controller IO queue size 256, less than required. 00:34:29.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:29.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:29.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:29.000 Initialization complete. Launching workers. 
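The xtrace repeated throughout this section shows how `interrupt/common.sh` decides whether a reactor thread is busy or idle: take one batch sample from `top` for the thread named `reactor_<idx>` inside the target PID, strip leading whitespace, read column 9 (%CPU), truncate the fraction, and compare against the busy/idle thresholds (65 and 30 above). A minimal standalone sketch of that probe, with hypothetical helper names (this is not the SPDK script itself):

```shell
#!/usr/bin/env bash
# reactor_cpu_rate: sample %CPU once for thread reactor_<idx> of <pid>,
# mirroring the traced pipeline: top -bHn 1 | grep | sed | awk '{print $9}'.
reactor_cpu_rate() {
    local pid=$1 idx=$2 row rate
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
    rate=$(awk '{print $9}' <<<"$row")
    echo "${rate%.*}"   # drop the fractional part, as the script does
}

# reactor_state: classify an integer %CPU sample against the thresholds
# the log uses (busy_threshold=65, idle_threshold=30 by default).
reactor_state() {
    local rate=$1 busy_threshold=${2:-65} idle_threshold=${3:-30}
    if (( rate >= busy_threshold )); then
        echo busy
    elif (( rate <= idle_threshold )); then
        echo idle
    else
        echo unknown
    fi
}
```

In the log, `cpu_rate=99` classifies reactor 0 and reactor 1 as busy while `spdk_nvme_perf` runs, and `cpu_rate=0` classifies them as idle before and after; the actual script retries the `top` sample up to 10 times (`j = 10`) rather than sampling once.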
00:34:29.000 ======================================================== 00:34:29.000 Latency(us) 00:34:29.000 Device Information : IOPS MiB/s Average min max 00:34:29.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16995.80 66.39 15069.52 3176.25 30116.18 00:34:29.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16745.40 65.41 15290.99 8251.88 28348.38 00:34:29.000 ======================================================== 00:34:29.000 Total : 33741.19 131.80 15179.43 3176.25 30116.18 00:34:29.000 00:34:29.000 12:48:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:29.000 12:48:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 422056 0 00:34:29.000 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 0 idle 00:34:29.000 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
422056 -w 256 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422056 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422056 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.24 reactor_0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 422056 1 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 1 idle 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:29.001 12:48:34 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422060 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422060 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.001 12:48:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:29.570 12:48:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:34:29.570 12:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:29.570 12:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:29.570 12:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:29.570 12:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 422056 0 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 0 idle 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256 00:34:31.478 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422056 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.48 reactor_0' 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422056 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.48 reactor_0 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 422056 1 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 422056 1 idle 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=422056 00:34:31.737 
12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.737 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 422056 -w 256 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 422060 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1' 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 422060 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.10 reactor_1 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.738 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:31.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:31.998 rmmod nvme_tcp 00:34:31.998 rmmod nvme_fabrics 00:34:31.998 rmmod nvme_keyring 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:31.998 12:48:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 422056 ']' 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 422056 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 422056 ']' 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 422056 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422056 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:31.998 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422056' 00:34:31.998 killing process with pid 422056 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 422056 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 422056 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:32.258 12:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.792 12:48:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.792 00:34:34.792 real 0m22.810s 00:34:34.792 user 0m39.622s 00:34:34.792 sys 0m8.463s 00:34:34.792 12:48:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.792 12:48:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:34.792 ************************************ 00:34:34.792 END TEST nvmf_interrupt 00:34:34.792 ************************************ 00:34:34.792 00:34:34.792 real 27m24.024s 00:34:34.792 user 56m20.656s 00:34:34.792 sys 9m24.385s 00:34:34.792 12:48:40 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.792 12:48:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.792 ************************************ 00:34:34.792 END TEST nvmf_tcp 00:34:34.792 ************************************ 00:34:34.792 12:48:40 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:34.792 12:48:40 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:34.792 12:48:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.792 12:48:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.792 12:48:40 -- common/autotest_common.sh@10 -- # set +x 00:34:34.792 ************************************ 
00:34:34.792 START TEST spdkcli_nvmf_tcp 00:34:34.792 ************************************ 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:34.792 * Looking for test storage... 00:34:34.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:34.792 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:34.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.793 --rc genhtml_branch_coverage=1 00:34:34.793 --rc genhtml_function_coverage=1 00:34:34.793 --rc genhtml_legend=1 00:34:34.793 --rc geninfo_all_blocks=1 00:34:34.793 --rc geninfo_unexecuted_blocks=1 00:34:34.793 00:34:34.793 ' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:34.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.793 --rc genhtml_branch_coverage=1 00:34:34.793 --rc genhtml_function_coverage=1 00:34:34.793 --rc genhtml_legend=1 00:34:34.793 --rc geninfo_all_blocks=1 
00:34:34.793 --rc geninfo_unexecuted_blocks=1 00:34:34.793 00:34:34.793 ' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:34.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.793 --rc genhtml_branch_coverage=1 00:34:34.793 --rc genhtml_function_coverage=1 00:34:34.793 --rc genhtml_legend=1 00:34:34.793 --rc geninfo_all_blocks=1 00:34:34.793 --rc geninfo_unexecuted_blocks=1 00:34:34.793 00:34:34.793 ' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:34.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.793 --rc genhtml_branch_coverage=1 00:34:34.793 --rc genhtml_function_coverage=1 00:34:34.793 --rc genhtml_legend=1 00:34:34.793 --rc geninfo_all_blocks=1 00:34:34.793 --rc geninfo_unexecuted_blocks=1 00:34:34.793 00:34:34.793 ' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=424963 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 424963 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 424963 ']' 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:34.793 12:48:40 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.793 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.793 [2024-11-20 12:48:40.431568] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:34:34.793 [2024-11-20 12:48:40.431611] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424963 ] 00:34:34.793 [2024-11-20 12:48:40.506088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:34.793 [2024-11-20 12:48:40.547704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.793 [2024-11-20 12:48:40.547706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:35.052 
12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 12:48:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:35.052 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:35.052 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:35.052 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:35.052 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:35.052 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:35.052 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:35.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.052 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.052 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:35.052 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:35.052 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:35.052 ' 00:34:38.329 [2024-11-20 12:48:43.380950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.264 [2024-11-20 12:48:44.717324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:41.896 [2024-11-20 12:48:47.204936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:34:43.792 [2024-11-20 12:48:49.379725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:45.692 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:45.692 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:45.692 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:45.692 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:45.692 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:45.692 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:45.692 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.692 
12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:45.692 12:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.951 12:48:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:45.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:45.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:45.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:45.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:45.951 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:45.951 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:45.951 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:45.951 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:45.952 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:45.952 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:45.952 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:45.952 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:45.952 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:45.952 ' 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:52.520 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:52.520 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:52.520 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:52.520 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:52.520 12:48:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:52.520 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:52.520 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.520 12:48:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 424963 00:34:52.520 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 424963 ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 424963' 00:34:52.521 killing process with pid 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 424963 ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 424963 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 424963 ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 424963 00:34:52.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (424963) - No such process 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 424963 is not found' 00:34:52.521 Process with pid 424963 is not found 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:52.521 00:34:52.521 real 0m17.357s 00:34:52.521 user 0m38.283s 00:34:52.521 sys 0m0.802s 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.521 12:48:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:52.521 ************************************ 00:34:52.521 END TEST spdkcli_nvmf_tcp 00:34:52.521 ************************************ 00:34:52.521 12:48:57 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:52.521 12:48:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:52.521 12:48:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.521 12:48:57 -- common/autotest_common.sh@10 
-- # set +x 00:34:52.521 ************************************ 00:34:52.521 START TEST nvmf_identify_passthru 00:34:52.521 ************************************ 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:52.521 * Looking for test storage... 00:34:52.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:52.521 12:48:57 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:52.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.521 --rc genhtml_branch_coverage=1 00:34:52.521 --rc genhtml_function_coverage=1 00:34:52.521 --rc genhtml_legend=1 00:34:52.521 --rc geninfo_all_blocks=1 00:34:52.521 --rc geninfo_unexecuted_blocks=1 00:34:52.521 00:34:52.521 ' 00:34:52.521 
12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:52.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.521 --rc genhtml_branch_coverage=1 00:34:52.521 --rc genhtml_function_coverage=1 00:34:52.521 --rc genhtml_legend=1 00:34:52.521 --rc geninfo_all_blocks=1 00:34:52.521 --rc geninfo_unexecuted_blocks=1 00:34:52.521 00:34:52.521 ' 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:52.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.521 --rc genhtml_branch_coverage=1 00:34:52.521 --rc genhtml_function_coverage=1 00:34:52.521 --rc genhtml_legend=1 00:34:52.521 --rc geninfo_all_blocks=1 00:34:52.521 --rc geninfo_unexecuted_blocks=1 00:34:52.521 00:34:52.521 ' 00:34:52.521 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:52.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.521 --rc genhtml_branch_coverage=1 00:34:52.521 --rc genhtml_function_coverage=1 00:34:52.521 --rc genhtml_legend=1 00:34:52.521 --rc geninfo_all_blocks=1 00:34:52.521 --rc geninfo_unexecuted_blocks=1 00:34:52.521 00:34:52.521 ' 00:34:52.521 12:48:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.521 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.521 12:48:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.521 12:48:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:52.522 12:48:57 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:52.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:52.522 12:48:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.522 12:48:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.522 12:48:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.522 12:48:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.522 12:48:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:52.522 12:48:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.522 12:48:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.522 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:52.522 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:52.522 12:48:57 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:52.522 12:48:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:57.850 
12:49:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:57.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:57.850 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:57.850 Found net devices under 0000:86:00.0: cvl_0_0 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.850 12:49:03 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:57.850 Found net devices under 0000:86:00.1: cvl_0_1 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:57.850 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:57.851 
12:49:03 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:57.851 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:58.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:58.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:34:58.110 00:34:58.110 --- 10.0.0.2 ping statistics --- 00:34:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.110 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:58.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:58.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:58.110 00:34:58.110 --- 10.0.0.1 ping statistics --- 00:34:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.110 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:58.110 12:49:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:58.110 
12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:58.110 12:49:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:58.110 12:49:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:03.378 12:49:08 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:35:03.378 12:49:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:35:03.378 12:49:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:03.378 12:49:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=432290 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:08.652 12:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 432290 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 432290 ']' 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.652 12:49:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 [2024-11-20 12:49:13.446535] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:35:08.652 [2024-11-20 12:49:13.446588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.652 [2024-11-20 12:49:13.526804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:08.652 [2024-11-20 12:49:13.567627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.652 [2024-11-20 12:49:13.567666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.652 [2024-11-20 12:49:13.567673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.652 [2024-11-20 12:49:13.567679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.652 [2024-11-20 12:49:13.567684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:08.652 [2024-11-20 12:49:13.569315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.652 [2024-11-20 12:49:13.569421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.652 [2024-11-20 12:49:13.569527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.652 [2024-11-20 12:49:13.569528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:08.652 12:49:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 INFO: Log level set to 20 00:35:08.652 INFO: Requests: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "method": "nvmf_set_config", 00:35:08.652 "id": 1, 00:35:08.652 "params": { 00:35:08.652 "admin_cmd_passthru": { 00:35:08.652 "identify_ctrlr": true 00:35:08.652 } 00:35:08.652 } 00:35:08.652 } 00:35:08.652 00:35:08.652 INFO: response: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "id": 1, 00:35:08.652 "result": true 00:35:08.652 } 00:35:08.652 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.652 12:49:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 INFO: Setting log level to 20 00:35:08.652 INFO: Setting log level to 20 00:35:08.652 INFO: Log level set to 20 00:35:08.652 INFO: Log level set to 20 00:35:08.652 
INFO: Requests: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "method": "framework_start_init", 00:35:08.652 "id": 1 00:35:08.652 } 00:35:08.652 00:35:08.652 INFO: Requests: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "method": "framework_start_init", 00:35:08.652 "id": 1 00:35:08.652 } 00:35:08.652 00:35:08.652 [2024-11-20 12:49:14.354794] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:08.652 INFO: response: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "id": 1, 00:35:08.652 "result": true 00:35:08.652 } 00:35:08.652 00:35:08.652 INFO: response: 00:35:08.652 { 00:35:08.652 "jsonrpc": "2.0", 00:35:08.652 "id": 1, 00:35:08.652 "result": true 00:35:08.652 } 00:35:08.652 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.652 12:49:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 INFO: Setting log level to 40 00:35:08.652 INFO: Setting log level to 40 00:35:08.652 INFO: Setting log level to 40 00:35:08.652 [2024-11-20 12:49:14.368123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.652 12:49:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:08.652 12:49:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:35:08.652 12:49:14 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.652 12:49:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 Nvme0n1 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 [2024-11-20 12:49:17.276721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.932 12:49:17 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 [ 00:35:11.932 { 00:35:11.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:11.932 "subtype": "Discovery", 00:35:11.932 "listen_addresses": [], 00:35:11.932 "allow_any_host": true, 00:35:11.932 "hosts": [] 00:35:11.932 }, 00:35:11.932 { 00:35:11.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:11.932 "subtype": "NVMe", 00:35:11.932 "listen_addresses": [ 00:35:11.932 { 00:35:11.932 "trtype": "TCP", 00:35:11.932 "adrfam": "IPv4", 00:35:11.932 "traddr": "10.0.0.2", 00:35:11.932 "trsvcid": "4420" 00:35:11.932 } 00:35:11.932 ], 00:35:11.932 "allow_any_host": true, 00:35:11.932 "hosts": [], 00:35:11.932 "serial_number": "SPDK00000000000001", 00:35:11.932 "model_number": "SPDK bdev Controller", 00:35:11.932 "max_namespaces": 1, 00:35:11.932 "min_cntlid": 1, 00:35:11.932 "max_cntlid": 65519, 00:35:11.932 "namespaces": [ 00:35:11.932 { 00:35:11.932 "nsid": 1, 00:35:11.932 "bdev_name": "Nvme0n1", 00:35:11.932 "name": "Nvme0n1", 00:35:11.932 "nguid": "915B2A284AF747C3A37C4ACFF27F9562", 00:35:11.932 "uuid": "915b2a28-4af7-47c3-a37c-4acff27f9562" 00:35:11.932 } 00:35:11.932 ] 00:35:11.932 } 00:35:11.932 ] 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:11.932 12:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.932 rmmod nvme_tcp 00:35:11.932 rmmod nvme_fabrics 00:35:11.932 rmmod nvme_keyring 00:35:11.932 12:49:17 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 432290 ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 432290 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 432290 ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 432290 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 432290 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 432290' 00:35:11.932 killing process with pid 432290 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 432290 00:35:11.932 12:49:17 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 432290 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:14.460 12:49:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.460 12:49:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:14.460 12:49:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.367 12:49:21 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:16.367 00:35:16.367 real 0m24.204s 00:35:16.367 user 0m32.558s 00:35:16.367 sys 0m6.295s 00:35:16.367 12:49:21 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.367 12:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:16.367 ************************************ 00:35:16.367 END TEST nvmf_identify_passthru 00:35:16.367 ************************************ 00:35:16.367 12:49:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:16.367 12:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:16.367 12:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.367 12:49:21 -- common/autotest_common.sh@10 -- # set +x 00:35:16.367 ************************************ 00:35:16.367 START TEST nvmf_dif 00:35:16.367 ************************************ 00:35:16.367 12:49:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:16.367 * Looking for test storage... 
00:35:16.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:16.367 12:49:21 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:16.367 12:49:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:16.367 12:49:21 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:16.367 12:49:22 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:16.367 12:49:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.367 12:49:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.367 12:49:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.367 12:49:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.367 12:49:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:16.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.368 --rc genhtml_branch_coverage=1 00:35:16.368 --rc genhtml_function_coverage=1 00:35:16.368 --rc genhtml_legend=1 00:35:16.368 --rc geninfo_all_blocks=1 00:35:16.368 --rc geninfo_unexecuted_blocks=1 00:35:16.368 00:35:16.368 ' 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:16.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.368 --rc genhtml_branch_coverage=1 00:35:16.368 --rc genhtml_function_coverage=1 00:35:16.368 --rc genhtml_legend=1 00:35:16.368 --rc geninfo_all_blocks=1 00:35:16.368 --rc geninfo_unexecuted_blocks=1 00:35:16.368 00:35:16.368 ' 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:35:16.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.368 --rc genhtml_branch_coverage=1 00:35:16.368 --rc genhtml_function_coverage=1 00:35:16.368 --rc genhtml_legend=1 00:35:16.368 --rc geninfo_all_blocks=1 00:35:16.368 --rc geninfo_unexecuted_blocks=1 00:35:16.368 00:35:16.368 ' 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:16.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.368 --rc genhtml_branch_coverage=1 00:35:16.368 --rc genhtml_function_coverage=1 00:35:16.368 --rc genhtml_legend=1 00:35:16.368 --rc geninfo_all_blocks=1 00:35:16.368 --rc geninfo_unexecuted_blocks=1 00:35:16.368 00:35:16.368 ' 00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:16.368 12:49:22 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.368 12:49:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.368 12:49:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.368 12:49:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.368 12:49:22 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.368 12:49:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:16.368 12:49:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:16.368 12:49:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:16.368 12:49:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:16.368 12:49:22 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:16.369 12:49:22 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.369 12:49:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.938 12:49:27 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.938 12:49:27 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.938 12:49:27 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:22.939 12:49:27 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:22.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:22.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.939 12:49:27 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:22.939 Found net devices under 0000:86:00.0: cvl_0_0 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:22.939 Found net devices under 0000:86:00.1: cvl_0_1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.939 
12:49:27 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:22.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:35:22.939 00:35:22.939 --- 10.0.0.2 ping statistics --- 00:35:22.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.939 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:22.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:35:22.939 00:35:22.939 --- 10.0.0.1 ping statistics --- 00:35:22.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.939 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:22.939 12:49:27 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.475 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:25.475 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:35:25.475 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:25.475 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.475 12:49:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:25.475 12:49:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=437989 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 437989 00:35:25.475 12:49:30 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 437989 ']' 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:25.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.475 12:49:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.475 [2024-11-20 12:49:30.925790] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:35:25.475 [2024-11-20 12:49:30.925835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.475 [2024-11-20 12:49:31.001827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.475 [2024-11-20 12:49:31.042280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.475 [2024-11-20 12:49:31.042316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.475 [2024-11-20 12:49:31.042323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.475 [2024-11-20 12:49:31.042329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.475 [2024-11-20 12:49:31.042335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:25.475 [2024-11-20 12:49:31.042901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:25.475 12:49:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.475 12:49:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.475 12:49:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:25.475 12:49:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.475 [2024-11-20 12:49:31.177660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.475 12:49:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.475 12:49:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.475 ************************************ 00:35:25.475 START TEST fio_dif_1_default 00:35:25.475 ************************************ 00:35:25.475 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:25.475 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:25.475 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:25.475 12:49:31 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.476 bdev_null0 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.476 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.735 [2024-11-20 12:49:31.245976] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.735 { 00:35:25.735 "params": { 00:35:25.735 "name": "Nvme$subsystem", 00:35:25.735 "trtype": "$TEST_TRANSPORT", 00:35:25.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.735 "adrfam": "ipv4", 00:35:25.735 "trsvcid": "$NVMF_PORT", 00:35:25.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.735 "hdgst": ${hdgst:-false}, 00:35:25.735 "ddgst": ${ddgst:-false} 00:35:25.735 }, 00:35:25.735 "method": "bdev_nvme_attach_controller" 00:35:25.735 } 00:35:25.735 EOF 00:35:25.735 )") 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:25.735 "params": { 00:35:25.735 "name": "Nvme0", 00:35:25.735 "trtype": "tcp", 00:35:25.735 "traddr": "10.0.0.2", 00:35:25.735 "adrfam": "ipv4", 00:35:25.735 "trsvcid": "4420", 00:35:25.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.735 "hdgst": false, 00:35:25.735 "ddgst": false 00:35:25.735 }, 00:35:25.735 "method": "bdev_nvme_attach_controller" 00:35:25.735 }' 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.735 12:49:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.995 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:25.995 fio-3.35 
00:35:25.995 Starting 1 thread 00:35:38.201 00:35:38.201 filename0: (groupid=0, jobs=1): err= 0: pid=438356: Wed Nov 20 12:49:42 2024 00:35:38.201 read: IOPS=220, BW=881KiB/s (902kB/s)(8848KiB/10040msec) 00:35:38.201 slat (nsec): min=5378, max=32338, avg=5981.77, stdev=904.62 00:35:38.201 clat (usec): min=369, max=45760, avg=18138.61, stdev=20225.62 00:35:38.201 lat (usec): min=374, max=45792, avg=18144.59, stdev=20225.56 00:35:38.201 clat percentiles (usec): 00:35:38.201 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 400], 00:35:38.201 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 553], 60.00th=[40633], 00:35:38.201 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:35:38.201 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:35:38.201 | 99.99th=[45876] 00:35:38.201 bw ( KiB/s): min= 736, max= 1152, per=100.00%, avg=883.20, stdev=97.06, samples=20 00:35:38.201 iops : min= 184, max= 288, avg=220.80, stdev=24.27, samples=20 00:35:38.201 lat (usec) : 500=47.29%, 750=9.31% 00:35:38.201 lat (msec) : 50=43.40% 00:35:38.201 cpu : usr=93.14%, sys=6.60%, ctx=12, majf=0, minf=0 00:35:38.201 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.201 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.201 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:38.201 00:35:38.201 Run status group 0 (all jobs): 00:35:38.201 READ: bw=881KiB/s (902kB/s), 881KiB/s-881KiB/s (902kB/s-902kB/s), io=8848KiB (9060kB), run=10040-10040msec 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 00:35:38.201 real 0m11.241s 00:35:38.201 user 0m16.484s 00:35:38.201 sys 0m1.030s 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 ************************************ 00:35:38.201 END TEST fio_dif_1_default 00:35:38.201 ************************************ 00:35:38.201 12:49:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:38.201 12:49:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:38.201 12:49:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 ************************************ 00:35:38.201 START TEST fio_dif_1_multi_subsystems 00:35:38.201 ************************************ 00:35:38.201 12:49:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 bdev_null0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 [2024-11-20 12:49:42.558921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 bdev_null1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.201 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.202 { 00:35:38.202 "params": { 00:35:38.202 "name": "Nvme$subsystem", 00:35:38.202 "trtype": "$TEST_TRANSPORT", 00:35:38.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.202 "adrfam": "ipv4", 00:35:38.202 "trsvcid": "$NVMF_PORT", 00:35:38.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.202 "hdgst": ${hdgst:-false}, 00:35:38.202 "ddgst": ${ddgst:-false} 00:35:38.202 }, 00:35:38.202 "method": "bdev_nvme_attach_controller" 00:35:38.202 } 00:35:38.202 EOF 00:35:38.202 )") 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.202 { 00:35:38.202 "params": { 00:35:38.202 "name": "Nvme$subsystem", 00:35:38.202 "trtype": "$TEST_TRANSPORT", 00:35:38.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.202 "adrfam": "ipv4", 00:35:38.202 "trsvcid": "$NVMF_PORT", 00:35:38.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.202 "hdgst": ${hdgst:-false}, 00:35:38.202 "ddgst": ${ddgst:-false} 00:35:38.202 }, 00:35:38.202 "method": "bdev_nvme_attach_controller" 00:35:38.202 } 00:35:38.202 EOF 00:35:38.202 )") 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:38.202 "params": { 00:35:38.202 "name": "Nvme0", 00:35:38.202 "trtype": "tcp", 00:35:38.202 "traddr": "10.0.0.2", 00:35:38.202 "adrfam": "ipv4", 00:35:38.202 "trsvcid": "4420", 00:35:38.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.202 "hdgst": false, 00:35:38.202 "ddgst": false 00:35:38.202 }, 00:35:38.202 "method": "bdev_nvme_attach_controller" 00:35:38.202 },{ 00:35:38.202 "params": { 00:35:38.202 "name": "Nvme1", 00:35:38.202 "trtype": "tcp", 00:35:38.202 "traddr": "10.0.0.2", 00:35:38.202 "adrfam": "ipv4", 00:35:38.202 "trsvcid": "4420", 00:35:38.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.202 "hdgst": false, 00:35:38.202 "ddgst": false 00:35:38.202 }, 00:35:38.202 "method": "bdev_nvme_attach_controller" 00:35:38.202 }' 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:38.202 12:49:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.202 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.202 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.202 fio-3.35 00:35:38.202 Starting 2 threads 00:35:48.182 00:35:48.182 filename0: (groupid=0, jobs=1): err= 0: pid=440329: Wed Nov 20 12:49:53 2024 00:35:48.182 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10009msec) 00:35:48.182 slat (nsec): min=5946, max=30156, avg=7565.21, stdev=2634.08 00:35:48.182 clat (usec): min=40903, max=42186, avg=41852.53, stdev=331.67 00:35:48.182 lat (usec): min=40909, max=42217, avg=41860.09, stdev=331.74 00:35:48.182 clat percentiles (usec): 00:35:48.182 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[42206], 00:35:48.182 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:48.182 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:48.182 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:48.182 | 99.99th=[42206] 00:35:48.182 bw ( KiB/s): min= 352, max= 384, per=39.90%, avg=380.80, stdev= 9.85, samples=20 00:35:48.182 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:48.182 lat (msec) : 50=100.00% 00:35:48.182 cpu : usr=96.89%, sys=2.82%, ctx=19, majf=0, minf=0 00:35:48.182 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.182 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.182 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.182 filename1: (groupid=0, jobs=1): err= 0: pid=440330: Wed Nov 20 12:49:53 2024 00:35:48.182 read: IOPS=142, BW=571KiB/s (585kB/s)(5728KiB/10029msec) 00:35:48.182 slat (nsec): min=5949, max=28828, avg=7211.45, stdev=2405.62 00:35:48.182 clat (usec): min=364, max=42592, avg=27990.68, stdev=19469.53 00:35:48.182 lat (usec): min=371, max=42599, avg=27997.89, stdev=19469.18 00:35:48.182 clat percentiles (usec): 00:35:48.182 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 404], 00:35:48.182 | 30.00th=[ 453], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:35:48.182 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:48.182 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:48.182 | 99.99th=[42730] 00:35:48.182 bw ( KiB/s): min= 384, max= 768, per=59.95%, avg=571.20, stdev=189.25, samples=20 00:35:48.182 iops : min= 96, max= 192, avg=142.80, stdev=47.31, samples=20 00:35:48.182 lat (usec) : 500=32.68%, 750=0.28% 00:35:48.182 lat (msec) : 2=0.28%, 50=66.76% 00:35:48.182 cpu : usr=96.82%, sys=2.91%, ctx=6, majf=0, minf=2 00:35:48.182 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.182 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.182 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.182 00:35:48.183 Run status group 0 (all jobs): 00:35:48.183 READ: bw=952KiB/s (975kB/s), 382KiB/s-571KiB/s (391kB/s-585kB/s), io=9552KiB (9781kB), run=10009-10029msec 00:35:48.442 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:48.442 12:49:53 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:48.442 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.442 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.442 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:48.442 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 00:35:48.443 real 0m11.489s 00:35:48.443 user 0m26.564s 00:35:48.443 sys 0m0.905s 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 ************************************ 00:35:48.443 END TEST fio_dif_1_multi_subsystems 00:35:48.443 ************************************ 00:35:48.443 12:49:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:48.443 12:49:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.443 12:49:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 ************************************ 00:35:48.443 START TEST fio_dif_rand_params 00:35:48.443 ************************************ 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 bdev_null0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 12:49:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.443 [2024-11-20 12:49:54.118763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.443 { 00:35:48.443 "params": { 
00:35:48.443 "name": "Nvme$subsystem", 00:35:48.443 "trtype": "$TEST_TRANSPORT", 00:35:48.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.443 "adrfam": "ipv4", 00:35:48.443 "trsvcid": "$NVMF_PORT", 00:35:48.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.443 "hdgst": ${hdgst:-false}, 00:35:48.443 "ddgst": ${ddgst:-false} 00:35:48.443 }, 00:35:48.443 "method": "bdev_nvme_attach_controller" 00:35:48.443 } 00:35:48.443 EOF 00:35:48.443 )") 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:48.443 12:49:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:48.443 "params": { 00:35:48.443 "name": "Nvme0", 00:35:48.443 "trtype": "tcp", 00:35:48.443 "traddr": "10.0.0.2", 00:35:48.443 "adrfam": "ipv4", 00:35:48.443 "trsvcid": "4420", 00:35:48.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.443 "hdgst": false, 00:35:48.443 "ddgst": false 00:35:48.443 }, 00:35:48.443 "method": "bdev_nvme_attach_controller" 00:35:48.443 }' 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.443 12:49:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.013 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:49.013 ... 00:35:49.013 fio-3.35 00:35:49.013 Starting 3 threads 00:35:55.576 00:35:55.576 filename0: (groupid=0, jobs=1): err= 0: pid=442292: Wed Nov 20 12:50:00 2024 00:35:55.576 read: IOPS=348, BW=43.6MiB/s (45.7MB/s)(218MiB/5003msec) 00:35:55.576 slat (nsec): min=6167, max=57564, avg=13913.37, stdev=6139.64 00:35:55.576 clat (usec): min=3191, max=87986, avg=8584.17, stdev=6173.42 00:35:55.576 lat (usec): min=3198, max=87994, avg=8598.09, stdev=6173.35 00:35:55.576 clat percentiles (usec): 00:35:55.576 | 1.00th=[ 3621], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6521], 00:35:55.576 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:35:55.576 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10028], 00:35:55.576 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[87557], 00:35:55.576 | 99.99th=[87557] 00:35:55.576 bw ( KiB/s): min=34816, max=50688, per=35.96%, avg=44903.89, stdev=5209.88, samples=9 00:35:55.576 iops : min= 272, max= 396, avg=350.78, stdev=40.71, samples=9 00:35:55.576 lat (msec) : 4=2.06%, 10=93.24%, 20=2.69%, 50=1.66%, 100=0.34% 00:35:55.576 cpu : usr=95.84%, sys=3.84%, ctx=18, majf=0, minf=0 00:35:55.576 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 issued rwts: total=1745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.576 filename0: (groupid=0, jobs=1): err= 0: pid=442293: Wed Nov 20 12:50:00 2024 00:35:55.576 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5044msec) 00:35:55.576 slat (nsec): min=6119, max=47449, avg=14500.84, stdev=6820.92 
00:35:55.576 clat (usec): min=3409, max=52118, avg=9527.14, stdev=4807.98 00:35:55.576 lat (usec): min=3416, max=52129, avg=9541.64, stdev=4807.78 00:35:55.576 clat percentiles (usec): 00:35:55.576 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6587], 00:35:55.576 | 30.00th=[ 7439], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:35:55.576 | 70.00th=[10552], 80.00th=[11207], 90.00th=[11863], 95.00th=[12256], 00:35:55.576 | 99.00th=[45876], 99.50th=[48497], 99.90th=[51119], 99.95th=[52167], 00:35:55.576 | 99.99th=[52167] 00:35:55.576 bw ( KiB/s): min=27136, max=48896, per=32.37%, avg=40422.40, stdev=6622.49, samples=10 00:35:55.576 iops : min= 212, max= 382, avg=315.80, stdev=51.74, samples=10 00:35:55.576 lat (msec) : 4=0.44%, 10=60.59%, 20=37.70%, 50=1.01%, 100=0.25% 00:35:55.576 cpu : usr=93.60%, sys=4.46%, ctx=283, majf=0, minf=9 00:35:55.576 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 issued rwts: total=1581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.576 filename0: (groupid=0, jobs=1): err= 0: pid=442294: Wed Nov 20 12:50:00 2024 00:35:55.576 read: IOPS=318, BW=39.9MiB/s (41.8MB/s)(199MiB/5003msec) 00:35:55.576 slat (nsec): min=6077, max=43802, avg=13177.62, stdev=5646.14 00:35:55.576 clat (usec): min=3110, max=51234, avg=9392.50, stdev=7747.67 00:35:55.576 lat (usec): min=3117, max=51242, avg=9405.68, stdev=7748.13 00:35:55.576 clat percentiles (usec): 00:35:55.576 | 1.00th=[ 4047], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 7111], 00:35:55.576 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455], 00:35:55.576 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10421], 00:35:55.576 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 
99.95th=[51119], 00:35:55.576 | 99.99th=[51119] 00:35:55.576 bw ( KiB/s): min=30464, max=47360, per=32.96%, avg=41159.11, stdev=5945.84, samples=9 00:35:55.576 iops : min= 238, max= 370, avg=321.56, stdev=46.45, samples=9 00:35:55.576 lat (msec) : 4=0.94%, 10=92.35%, 20=3.13%, 50=2.51%, 100=1.07% 00:35:55.576 cpu : usr=95.40%, sys=3.62%, ctx=130, majf=0, minf=9 00:35:55.576 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.576 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.576 00:35:55.576 Run status group 0 (all jobs): 00:35:55.576 READ: bw=122MiB/s (128MB/s), 39.2MiB/s-43.6MiB/s (41.1MB/s-45.7MB/s), io=615MiB (645MB), run=5003-5044msec 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.576 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.577 12:50:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 bdev_null0 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 [2024-11-20 12:50:00.505569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 bdev_null1 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:55.577 bdev_null2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.577 { 00:35:55.577 "params": { 00:35:55.577 "name": "Nvme$subsystem", 00:35:55.577 "trtype": "$TEST_TRANSPORT", 00:35:55.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.577 "adrfam": "ipv4", 00:35:55.577 "trsvcid": "$NVMF_PORT", 00:35:55.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.577 "hdgst": ${hdgst:-false}, 00:35:55.577 "ddgst": ${ddgst:-false} 00:35:55.577 }, 00:35:55.577 "method": "bdev_nvme_attach_controller" 00:35:55.577 } 00:35:55.577 EOF 00:35:55.577 )") 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.577 12:50:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:55.577 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.578 { 00:35:55.578 "params": { 00:35:55.578 "name": "Nvme$subsystem", 00:35:55.578 "trtype": "$TEST_TRANSPORT", 00:35:55.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.578 "adrfam": "ipv4", 00:35:55.578 "trsvcid": "$NVMF_PORT", 00:35:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.578 "hdgst": ${hdgst:-false}, 00:35:55.578 "ddgst": ${ddgst:-false} 00:35:55.578 }, 00:35:55.578 "method": "bdev_nvme_attach_controller" 00:35:55.578 } 00:35:55.578 EOF 00:35:55.578 )") 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.578 12:50:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.578 { 00:35:55.578 "params": { 00:35:55.578 "name": "Nvme$subsystem", 00:35:55.578 "trtype": "$TEST_TRANSPORT", 00:35:55.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.578 "adrfam": "ipv4", 00:35:55.578 "trsvcid": "$NVMF_PORT", 00:35:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.578 "hdgst": ${hdgst:-false}, 00:35:55.578 "ddgst": ${ddgst:-false} 00:35:55.578 }, 00:35:55.578 "method": "bdev_nvme_attach_controller" 00:35:55.578 } 00:35:55.578 EOF 00:35:55.578 )") 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.578 "params": { 00:35:55.578 "name": "Nvme0", 00:35:55.578 "trtype": "tcp", 00:35:55.578 "traddr": "10.0.0.2", 00:35:55.578 "adrfam": "ipv4", 00:35:55.578 "trsvcid": "4420", 00:35:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.578 "hdgst": false, 00:35:55.578 "ddgst": false 00:35:55.578 }, 00:35:55.578 "method": "bdev_nvme_attach_controller" 00:35:55.578 },{ 00:35:55.578 "params": { 00:35:55.578 "name": "Nvme1", 00:35:55.578 "trtype": "tcp", 00:35:55.578 "traddr": "10.0.0.2", 00:35:55.578 "adrfam": "ipv4", 00:35:55.578 "trsvcid": "4420", 00:35:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.578 "hdgst": false, 00:35:55.578 "ddgst": false 00:35:55.578 }, 00:35:55.578 "method": "bdev_nvme_attach_controller" 00:35:55.578 },{ 00:35:55.578 "params": { 00:35:55.578 "name": "Nvme2", 00:35:55.578 "trtype": "tcp", 00:35:55.578 "traddr": "10.0.0.2", 00:35:55.578 "adrfam": "ipv4", 00:35:55.578 "trsvcid": "4420", 00:35:55.578 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:55.578 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:55.578 "hdgst": false, 00:35:55.578 "ddgst": false 00:35:55.578 }, 00:35:55.578 "method": "bdev_nvme_attach_controller" 00:35:55.578 }' 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 
-- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.578 12:50:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.578 ... 00:35:55.578 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.578 ... 00:35:55.578 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.578 ... 
00:35:55.578 fio-3.35 00:35:55.578 Starting 24 threads 00:36:07.774 00:36:07.774 filename0: (groupid=0, jobs=1): err= 0: pid=443410: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.774 slat (nsec): min=9140, max=97678, avg=32080.27, stdev=16315.45 00:36:07.774 clat (usec): min=13180, max=31318, avg=29746.77, stdev=1563.77 00:36:07.774 lat (usec): min=13203, max=31342, avg=29778.85, stdev=1564.76 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.774 | 99.99th=[31327] 00:36:07.774 bw ( KiB/s): min= 2048, max= 2299, per=4.17%, avg=2124.55, stdev=75.97, samples=20 00:36:07.774 iops : min= 512, max= 574, avg=531.10, stdev=18.90, samples=20 00:36:07.774 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.774 cpu : usr=98.44%, sys=1.16%, ctx=14, majf=0, minf=9 00:36:07.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.774 filename0: (groupid=0, jobs=1): err= 0: pid=443411: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:36:07.774 slat (usec): min=5, max=112, avg=32.53, stdev=15.48 00:36:07.774 clat (usec): min=9890, max=59077, avg=29905.16, stdev=2013.37 00:36:07.774 lat (usec): min=9905, max=59092, avg=29937.69, stdev=2012.55 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[29230], 
5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[30802], 99.50th=[31065], 99.90th=[58983], 99.95th=[58983], 00:36:07.774 | 99.99th=[58983] 00:36:07.774 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2112.00, stdev=77.69, samples=20 00:36:07.774 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:36:07.774 lat (msec) : 10=0.09%, 20=0.21%, 50=99.40%, 100=0.30% 00:36:07.774 cpu : usr=98.77%, sys=0.84%, ctx=14, majf=0, minf=9 00:36:07.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.774 filename0: (groupid=0, jobs=1): err= 0: pid=443412: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:36:07.774 slat (nsec): min=5273, max=88979, avg=31394.60, stdev=10389.10 00:36:07.774 clat (usec): min=10114, max=60286, avg=29944.61, stdev=1993.21 00:36:07.774 lat (usec): min=10139, max=60305, avg=29976.01, stdev=1992.63 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[30802], 99.50th=[31065], 99.90th=[58459], 99.95th=[58459], 00:36:07.774 | 99.99th=[60031] 00:36:07.774 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2112.15, stdev=77.30, samples=20 00:36:07.774 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 
00:36:07.774 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:36:07.774 cpu : usr=98.76%, sys=0.81%, ctx=33, majf=0, minf=9 00:36:07.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.774 filename0: (groupid=0, jobs=1): err= 0: pid=443413: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10006msec) 00:36:07.774 slat (usec): min=5, max=115, avg=32.33, stdev=14.89 00:36:07.774 clat (usec): min=20377, max=38395, avg=29976.21, stdev=727.53 00:36:07.774 lat (usec): min=20443, max=38412, avg=30008.54, stdev=723.70 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[30802], 99.50th=[31065], 99.90th=[38536], 99.95th=[38536], 00:36:07.774 | 99.99th=[38536] 00:36:07.774 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.37, stdev=65.66, samples=19 00:36:07.774 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:36:07.774 lat (msec) : 50=100.00% 00:36:07.774 cpu : usr=98.48%, sys=1.13%, ctx=18, majf=0, minf=9 00:36:07.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.774 
filename0: (groupid=0, jobs=1): err= 0: pid=443414: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=529, BW=2118KiB/s (2168kB/s)(20.7MiB/10008msec) 00:36:07.774 slat (usec): min=4, max=110, avg=31.45, stdev=13.90 00:36:07.774 clat (usec): min=9918, max=70662, avg=29922.90, stdev=2373.59 00:36:07.774 lat (usec): min=9938, max=70677, avg=29954.35, stdev=2372.86 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[22152], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[35914], 99.50th=[40109], 99.90th=[59507], 99.95th=[59507], 00:36:07.774 | 99.99th=[70779] 00:36:07.774 bw ( KiB/s): min= 1888, max= 2224, per=4.15%, avg=2112.80, stdev=84.74, samples=20 00:36:07.774 iops : min= 472, max= 556, avg=528.20, stdev=21.18, samples=20 00:36:07.774 lat (msec) : 10=0.09%, 20=0.32%, 50=99.28%, 100=0.30% 00:36:07.774 cpu : usr=98.74%, sys=0.88%, ctx=9, majf=0, minf=9 00:36:07.774 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.774 issued rwts: total=5298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.774 filename0: (groupid=0, jobs=1): err= 0: pid=443415: Wed Nov 20 12:50:12 2024 00:36:07.774 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10006msec) 00:36:07.774 slat (usec): min=7, max=108, avg=32.69, stdev=14.37 00:36:07.774 clat (usec): min=20596, max=37859, avg=29970.29, stdev=695.03 00:36:07.774 lat (usec): min=20664, max=37871, avg=30002.98, stdev=691.72 00:36:07.774 clat percentiles (usec): 00:36:07.774 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.774 | 30.00th=[29754], 
40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.774 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.774 | 99.00th=[30802], 99.50th=[31065], 99.90th=[38011], 99.95th=[38011], 00:36:07.774 | 99.99th=[38011] 00:36:07.774 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.37, stdev=65.66, samples=19 00:36:07.774 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:36:07.774 lat (msec) : 50=100.00% 00:36:07.774 cpu : usr=98.63%, sys=1.00%, ctx=13, majf=0, minf=9 00:36:07.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename0: (groupid=0, jobs=1): err= 0: pid=443416: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=530, BW=2121KiB/s (2172kB/s)(20.8MiB/10019msec) 00:36:07.775 slat (nsec): min=7442, max=93866, avg=31943.22, stdev=16587.55 00:36:07.775 clat (usec): min=19872, max=31816, avg=29856.70, stdev=769.66 00:36:07.775 lat (usec): min=19880, max=31841, avg=29888.65, stdev=772.37 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.775 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:36:07.775 | 99.99th=[31851] 00:36:07.775 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.37, stdev=65.66, samples=19 00:36:07.775 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:36:07.775 lat (msec) : 20=0.30%, 50=99.70% 00:36:07.775 cpu : usr=98.38%, sys=1.24%, ctx=13, majf=0, minf=9 
00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename0: (groupid=0, jobs=1): err= 0: pid=443417: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.775 slat (nsec): min=7671, max=94687, avg=33783.50, stdev=16888.45 00:36:07.775 clat (usec): min=13123, max=31301, avg=29749.87, stdev=1575.56 00:36:07.775 lat (usec): min=13153, max=31325, avg=29783.65, stdev=1576.36 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:36:07.775 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.775 | 99.99th=[31327] 00:36:07.775 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.775 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.775 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.775 cpu : usr=98.55%, sys=1.07%, ctx=12, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443418: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=532, 
BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.775 slat (nsec): min=6985, max=68552, avg=21652.19, stdev=12096.89 00:36:07.775 clat (usec): min=10639, max=31287, avg=29893.67, stdev=1583.57 00:36:07.775 lat (usec): min=10661, max=31315, avg=29915.32, stdev=1582.82 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[21365], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:36:07.775 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:36:07.775 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.775 | 99.99th=[31327] 00:36:07.775 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.775 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.775 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.775 cpu : usr=98.57%, sys=0.97%, ctx=62, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443420: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10007msec) 00:36:07.775 slat (usec): min=4, max=102, avg=32.27, stdev=13.14 00:36:07.775 clat (usec): min=9881, max=58725, avg=29929.54, stdev=1998.63 00:36:07.775 lat (usec): min=9904, max=58737, avg=29961.81, stdev=1997.40 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.775 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 
95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31065], 99.90th=[58459], 99.95th=[58459], 00:36:07.775 | 99.99th=[58983] 00:36:07.775 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2112.00, stdev=77.69, samples=20 00:36:07.775 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:36:07.775 lat (msec) : 10=0.11%, 20=0.19%, 50=99.40%, 100=0.30% 00:36:07.775 cpu : usr=98.67%, sys=0.95%, ctx=16, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443421: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:36:07.775 slat (usec): min=4, max=112, avg=34.33, stdev=14.35 00:36:07.775 clat (usec): min=9836, max=61274, avg=29939.48, stdev=2042.06 00:36:07.775 lat (usec): min=9864, max=61287, avg=29973.81, stdev=2040.28 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.775 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31065], 99.90th=[59507], 99.95th=[59507], 00:36:07.775 | 99.99th=[61080] 00:36:07.775 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2112.00, stdev=77.69, samples=20 00:36:07.775 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:36:07.775 lat (msec) : 10=0.15%, 20=0.15%, 50=99.40%, 100=0.30% 00:36:07.775 cpu : usr=98.33%, sys=1.29%, ctx=19, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443422: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=529, BW=2118KiB/s (2168kB/s)(20.7MiB/10004msec) 00:36:07.775 slat (usec): min=7, max=103, avg=30.16, stdev=14.16 00:36:07.775 clat (usec): min=18916, max=37813, avg=29993.38, stdev=767.23 00:36:07.775 lat (usec): min=18988, max=37826, avg=30023.54, stdev=764.29 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.775 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31327], 99.90th=[38011], 99.95th=[38011], 00:36:07.775 | 99.99th=[38011] 00:36:07.775 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.37, stdev=65.66, samples=19 00:36:07.775 iops : min= 512, max= 544, avg=528.84, stdev=16.42, samples=19 00:36:07.775 lat (msec) : 20=0.30%, 50=99.70% 00:36:07.775 cpu : usr=98.49%, sys=1.14%, ctx=15, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443423: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.775 slat (nsec): min=9363, max=96655, 
avg=34269.13, stdev=16821.60 00:36:07.775 clat (usec): min=13125, max=31352, avg=29731.93, stdev=1560.98 00:36:07.775 lat (usec): min=13151, max=31382, avg=29766.20, stdev=1562.43 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:36:07.775 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:36:07.775 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.775 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.775 | 99.99th=[31327] 00:36:07.775 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.775 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.775 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.775 cpu : usr=98.58%, sys=1.04%, ctx=14, majf=0, minf=9 00:36:07.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.775 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.775 filename1: (groupid=0, jobs=1): err= 0: pid=443424: Wed Nov 20 12:50:12 2024 00:36:07.775 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:36:07.775 slat (nsec): min=6203, max=58369, avg=12556.22, stdev=6286.22 00:36:07.775 clat (usec): min=19416, max=41902, avg=30126.65, stdev=890.06 00:36:07.775 lat (usec): min=19429, max=41924, avg=30139.21, stdev=889.43 00:36:07.775 clat percentiles (usec): 00:36:07.775 | 1.00th=[29492], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:36:07.775 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:36:07.776 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:36:07.776 | 99.00th=[30802], 99.50th=[31327], 
99.90th=[41681], 99.95th=[41681], 00:36:07.776 | 99.99th=[41681] 00:36:07.776 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2108.63, stdev=65.66, samples=19 00:36:07.776 iops : min= 512, max= 544, avg=527.16, stdev=16.42, samples=19 00:36:07.776 lat (msec) : 20=0.30%, 50=99.70% 00:36:07.776 cpu : usr=98.58%, sys=1.04%, ctx=15, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename1: (groupid=0, jobs=1): err= 0: pid=443425: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.776 slat (nsec): min=8292, max=93563, avg=33073.55, stdev=16495.51 00:36:07.776 clat (usec): min=12115, max=31352, avg=29737.50, stdev=1564.38 00:36:07.776 lat (usec): min=12139, max=31380, avg=29770.57, stdev=1565.69 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.776 | 99.99th=[31327] 00:36:07.776 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.776 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.776 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.776 cpu : usr=98.47%, sys=1.14%, ctx=14, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename1: (groupid=0, jobs=1): err= 0: pid=443426: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.8MiB/10001msec) 00:36:07.776 slat (usec): min=5, max=102, avg=26.75, stdev=15.42 00:36:07.776 clat (usec): min=10623, max=64275, avg=29750.29, stdev=2911.45 00:36:07.776 lat (usec): min=10643, max=64294, avg=29777.05, stdev=2911.71 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[21890], 5.00th=[23725], 10.00th=[29492], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30802], 00:36:07.776 | 99.00th=[38011], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:36:07.776 | 99.99th=[64226] 00:36:07.776 bw ( KiB/s): min= 1936, max= 2288, per=4.18%, avg=2132.21, stdev=87.45, samples=19 00:36:07.776 iops : min= 484, max= 572, avg=533.05, stdev=21.86, samples=19 00:36:07.776 lat (msec) : 20=0.34%, 50=99.21%, 100=0.45% 00:36:07.776 cpu : usr=98.50%, sys=1.13%, ctx=13, majf=0, minf=9 00:36:07.776 IO depths : 1=4.4%, 2=8.9%, 4=18.8%, 8=58.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=92.6%, 8=2.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename2: (groupid=0, jobs=1): err= 0: pid=443427: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.776 slat (nsec): min=11649, max=93533, avg=33451.88, stdev=16564.57 00:36:07.776 clat (usec): min=12124, max=31372, 
avg=29738.38, stdev=1565.51 00:36:07.776 lat (usec): min=12146, max=31396, avg=29771.83, stdev=1566.76 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.776 | 99.99th=[31327] 00:36:07.776 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.776 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.776 lat (msec) : 20=0.90%, 50=99.10% 00:36:07.776 cpu : usr=98.42%, sys=1.20%, ctx=14, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename2: (groupid=0, jobs=1): err= 0: pid=443428: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.776 slat (nsec): min=8852, max=97439, avg=34386.30, stdev=16739.66 00:36:07.776 clat (usec): min=10733, max=45433, avg=29729.75, stdev=1623.89 00:36:07.776 lat (usec): min=10755, max=45464, avg=29764.14, stdev=1625.34 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:36:07.776 | 99.99th=[45351] 00:36:07.776 bw ( 
KiB/s): min= 2048, max= 2304, per=4.17%, avg=2124.80, stdev=76.58, samples=20 00:36:07.776 iops : min= 512, max= 576, avg=531.20, stdev=19.14, samples=20 00:36:07.776 lat (msec) : 20=0.94%, 50=99.06% 00:36:07.776 cpu : usr=98.59%, sys=1.03%, ctx=12, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename2: (groupid=0, jobs=1): err= 0: pid=443429: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10005msec) 00:36:07.776 slat (usec): min=4, max=109, avg=34.38, stdev=14.18 00:36:07.776 clat (usec): min=20519, max=36690, avg=29909.72, stdev=670.22 00:36:07.776 lat (usec): min=20550, max=36703, avg=29944.10, stdev=668.79 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439], 00:36:07.776 | 99.99th=[36439] 00:36:07.776 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2115.58, stdev=65.44, samples=19 00:36:07.776 iops : min= 512, max= 544, avg=528.89, stdev=16.36, samples=19 00:36:07.776 lat (msec) : 50=100.00% 00:36:07.776 cpu : usr=98.39%, sys=1.23%, ctx=9, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued 
rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename2: (groupid=0, jobs=1): err= 0: pid=443431: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 00:36:07.776 slat (nsec): min=6175, max=87093, avg=22526.20, stdev=10760.20 00:36:07.776 clat (usec): min=19402, max=41970, avg=30060.95, stdev=897.11 00:36:07.776 lat (usec): min=19428, max=41986, avg=30083.48, stdev=895.68 00:36:07.776 clat percentiles (usec): 00:36:07.776 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:36:07.776 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[41681], 99.95th=[42206], 00:36:07.776 | 99.99th=[42206] 00:36:07.776 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2108.63, stdev=65.66, samples=19 00:36:07.776 iops : min= 512, max= 544, avg=527.16, stdev=16.42, samples=19 00:36:07.776 lat (msec) : 20=0.30%, 50=99.70% 00:36:07.776 cpu : usr=98.76%, sys=0.87%, ctx=17, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.776 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.776 filename2: (groupid=0, jobs=1): err= 0: pid=443432: Wed Nov 20 12:50:12 2024 00:36:07.776 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 00:36:07.776 slat (usec): min=4, max=111, avg=33.30, stdev=14.39 00:36:07.776 clat (usec): min=9856, max=59235, avg=29910.47, stdev=2021.79 00:36:07.776 lat (usec): min=9878, max=59249, avg=29943.78, stdev=2020.70 00:36:07.776 clat 
percentiles (usec): 00:36:07.776 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.776 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.776 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:36:07.776 | 99.00th=[30802], 99.50th=[31065], 99.90th=[58983], 99.95th=[58983], 00:36:07.776 | 99.99th=[58983] 00:36:07.776 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2112.00, stdev=77.69, samples=20 00:36:07.776 iops : min= 480, max= 544, avg=528.00, stdev=19.42, samples=20 00:36:07.776 lat (msec) : 10=0.13%, 20=0.17%, 50=99.40%, 100=0.30% 00:36:07.776 cpu : usr=98.38%, sys=1.24%, ctx=13, majf=0, minf=9 00:36:07.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.777 filename2: (groupid=0, jobs=1): err= 0: pid=443433: Wed Nov 20 12:50:12 2024 00:36:07.777 read: IOPS=555, BW=2222KiB/s (2275kB/s)(21.8MiB/10025msec) 00:36:07.777 slat (nsec): min=6207, max=53836, avg=9405.52, stdev=3240.59 00:36:07.777 clat (usec): min=1187, max=31365, avg=28718.41, stdev=5974.60 00:36:07.777 lat (usec): min=1193, max=31389, avg=28727.81, stdev=5974.44 00:36:07.777 clat percentiles (usec): 00:36:07.777 | 1.00th=[ 1336], 5.00th=[16909], 10.00th=[30016], 20.00th=[30016], 00:36:07.777 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.777 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30278], 95.00th=[30540], 00:36:07.777 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:36:07.777 | 99.99th=[31327] 00:36:07.777 bw ( KiB/s): min= 2048, max= 4096, per=4.36%, avg=2221.20, stdev=445.78, samples=20 00:36:07.777 iops : min= 512, 
max= 1024, avg=555.30, stdev=111.44, samples=20 00:36:07.777 lat (msec) : 2=3.84%, 4=0.18%, 10=0.29%, 20=1.28%, 50=94.41% 00:36:07.777 cpu : usr=98.49%, sys=1.06%, ctx=43, majf=0, minf=9 00:36:07.777 IO depths : 1=5.9%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:07.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.777 filename2: (groupid=0, jobs=1): err= 0: pid=443434: Wed Nov 20 12:50:12 2024 00:36:07.777 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:36:07.777 slat (usec): min=7, max=104, avg=31.65, stdev=17.15 00:36:07.777 clat (usec): min=13168, max=52419, avg=29782.59, stdev=1693.48 00:36:07.777 lat (usec): min=13194, max=52436, avg=29814.24, stdev=1694.16 00:36:07.777 clat percentiles (usec): 00:36:07.777 | 1.00th=[21365], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:36:07.777 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:36:07.777 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 00:36:07.777 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33817], 99.95th=[34866], 00:36:07.777 | 99.99th=[52167] 00:36:07.777 bw ( KiB/s): min= 2048, max= 2283, per=4.17%, avg=2124.55, stdev=70.88, samples=20 00:36:07.777 iops : min= 512, max= 570, avg=531.10, stdev=17.63, samples=20 00:36:07.777 lat (msec) : 20=0.94%, 50=99.02%, 100=0.04% 00:36:07.777 cpu : usr=98.71%, sys=0.90%, ctx=9, majf=0, minf=9 00:36:07.777 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:07.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:36:07.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.777 filename2: (groupid=0, jobs=1): err= 0: pid=443435: Wed Nov 20 12:50:12 2024 00:36:07.777 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10006msec) 00:36:07.777 slat (usec): min=7, max=112, avg=28.00, stdev=16.46 00:36:07.777 clat (usec): min=9210, max=60170, avg=30038.41, stdev=1999.39 00:36:07.777 lat (usec): min=9223, max=60181, avg=30066.41, stdev=1998.93 00:36:07.777 clat percentiles (usec): 00:36:07.777 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:36:07.777 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:36:07.777 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30278], 95.00th=[30278], 00:36:07.777 | 99.00th=[30802], 99.50th=[31065], 99.90th=[58459], 99.95th=[58459], 00:36:07.777 | 99.99th=[60031] 00:36:07.777 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2112.15, stdev=76.07, samples=20 00:36:07.777 iops : min= 480, max= 544, avg=528.00, stdev=19.12, samples=20 00:36:07.777 lat (msec) : 10=0.13%, 20=0.17%, 50=99.40%, 100=0.30% 00:36:07.777 cpu : usr=98.68%, sys=0.95%, ctx=13, majf=0, minf=9 00:36:07.777 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:07.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.777 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.777 00:36:07.777 Run status group 0 (all jobs): 00:36:07.777 READ: bw=49.8MiB/s (52.2MB/s), 2116KiB/s-2222KiB/s (2167kB/s-2275kB/s), io=499MiB (523MB), run=10001-10025msec 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # 
for sub in "$@" 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 
12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@28 -- # local sub 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 bdev_null0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.777 12:50:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.777 [2024-11-20 12:50:12.333125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:07.777 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.778 bdev_null1 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.778 
12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:07.778 { 00:36:07.778 "params": { 00:36:07.778 "name": "Nvme$subsystem", 00:36:07.778 "trtype": "$TEST_TRANSPORT", 00:36:07.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.778 "adrfam": "ipv4", 00:36:07.778 "trsvcid": "$NVMF_PORT", 00:36:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:36:07.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.778 "hdgst": ${hdgst:-false}, 00:36:07.778 "ddgst": ${ddgst:-false} 00:36:07.778 }, 00:36:07.778 "method": "bdev_nvme_attach_controller" 00:36:07.778 } 00:36:07.778 EOF 00:36:07.778 )") 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:07.778 { 00:36:07.778 "params": { 00:36:07.778 "name": "Nvme$subsystem", 00:36:07.778 "trtype": "$TEST_TRANSPORT", 00:36:07.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.778 "adrfam": "ipv4", 00:36:07.778 "trsvcid": "$NVMF_PORT", 00:36:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.778 "hdgst": ${hdgst:-false}, 00:36:07.778 "ddgst": ${ddgst:-false} 00:36:07.778 }, 00:36:07.778 "method": "bdev_nvme_attach_controller" 00:36:07.778 } 00:36:07.778 EOF 00:36:07.778 )") 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:07.778 "params": { 00:36:07.778 "name": "Nvme0", 00:36:07.778 "trtype": "tcp", 00:36:07.778 "traddr": "10.0.0.2", 00:36:07.778 "adrfam": "ipv4", 00:36:07.778 "trsvcid": "4420", 00:36:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.778 "hdgst": false, 00:36:07.778 "ddgst": false 00:36:07.778 }, 00:36:07.778 "method": "bdev_nvme_attach_controller" 00:36:07.778 },{ 00:36:07.778 "params": { 00:36:07.778 "name": "Nvme1", 00:36:07.778 "trtype": "tcp", 00:36:07.778 "traddr": "10.0.0.2", 00:36:07.778 "adrfam": "ipv4", 00:36:07.778 "trsvcid": "4420", 00:36:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:07.778 "hdgst": false, 00:36:07.778 "ddgst": false 00:36:07.778 }, 00:36:07.778 "method": "bdev_nvme_attach_controller" 00:36:07.778 }' 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:07.778 12:50:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:07.778 12:50:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.778 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.778 ... 00:36:07.778 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.778 ... 00:36:07.778 fio-3.35 00:36:07.778 Starting 4 threads 00:36:13.173 00:36:13.173 filename0: (groupid=0, jobs=1): err= 0: pid=445325: Wed Nov 20 12:50:18 2024 00:36:13.173 read: IOPS=2731, BW=21.3MiB/s (22.4MB/s)(107MiB/5002msec) 00:36:13.173 slat (nsec): min=5960, max=71886, avg=15230.88, stdev=8865.55 00:36:13.173 clat (usec): min=574, max=5708, avg=2884.17, stdev=461.68 00:36:13.173 lat (usec): min=582, max=5720, avg=2899.40, stdev=463.21 00:36:13.173 clat percentiles (usec): 00:36:13.173 | 1.00th=[ 1647], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2540], 00:36:13.173 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2999], 00:36:13.173 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3523], 00:36:13.173 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 5211], 00:36:13.173 | 99.99th=[ 5669] 00:36:13.173 bw ( KiB/s): min=20464, max=23792, per=25.73%, avg=21928.89, stdev=1123.03, samples=9 00:36:13.173 iops : min= 2558, max= 2974, avg=2741.11, stdev=140.38, samples=9 00:36:13.173 lat (usec) : 750=0.02%, 1000=0.23% 00:36:13.173 lat (msec) : 2=2.25%, 4=95.92%, 10=1.57% 00:36:13.173 cpu : usr=96.28%, sys=3.04%, ctx=79, majf=0, minf=9 00:36:13.173 IO depths : 1=0.4%, 2=7.0%, 4=64.1%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 
complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 issued rwts: total=13662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.173 filename0: (groupid=0, jobs=1): err= 0: pid=445326: Wed Nov 20 12:50:18 2024 00:36:13.173 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:36:13.173 slat (nsec): min=5892, max=64750, avg=14996.13, stdev=11089.23 00:36:13.173 clat (usec): min=504, max=5951, avg=2999.12, stdev=454.08 00:36:13.173 lat (usec): min=528, max=6008, avg=3014.11, stdev=454.72 00:36:13.173 clat percentiles (usec): 00:36:13.173 | 1.00th=[ 1909], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:36:13.173 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:36:13.173 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3720], 00:36:13.173 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5342], 99.95th=[ 5669], 00:36:13.173 | 99.99th=[ 5932] 00:36:13.173 bw ( KiB/s): min=19334, max=22080, per=24.69%, avg=21049.56, stdev=954.61, samples=9 00:36:13.173 iops : min= 2416, max= 2760, avg=2631.11, stdev=119.50, samples=9 00:36:13.173 lat (usec) : 750=0.02%, 1000=0.02% 00:36:13.173 lat (msec) : 2=1.35%, 4=95.65%, 10=2.97% 00:36:13.173 cpu : usr=96.96%, sys=2.70%, ctx=7, majf=0, minf=9 00:36:13.173 IO depths : 1=0.5%, 2=5.7%, 4=66.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 issued rwts: total=13135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.173 filename1: (groupid=0, jobs=1): err= 0: pid=445327: Wed Nov 20 12:50:18 2024 00:36:13.173 read: IOPS=2698, BW=21.1MiB/s (22.1MB/s)(105MiB/5002msec) 00:36:13.173 slat (nsec): min=5985, max=71761, avg=15239.88, stdev=11085.70 00:36:13.173 clat 
(usec): min=814, max=5865, avg=2916.08, stdev=437.93 00:36:13.173 lat (usec): min=821, max=5872, avg=2931.32, stdev=439.46 00:36:13.173 clat percentiles (usec): 00:36:13.173 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:36:13.173 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:36:13.173 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3556], 00:36:13.173 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5145], 00:36:13.173 | 99.99th=[ 5866] 00:36:13.173 bw ( KiB/s): min=20304, max=23440, per=25.33%, avg=21591.11, stdev=991.16, samples=9 00:36:13.173 iops : min= 2538, max= 2930, avg=2698.89, stdev=123.90, samples=9 00:36:13.173 lat (usec) : 1000=0.02% 00:36:13.173 lat (msec) : 2=2.51%, 4=95.66%, 10=1.81% 00:36:13.173 cpu : usr=97.32%, sys=2.34%, ctx=8, majf=0, minf=9 00:36:13.173 IO depths : 1=0.9%, 2=7.2%, 4=64.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 issued rwts: total=13497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.173 filename1: (groupid=0, jobs=1): err= 0: pid=445328: Wed Nov 20 12:50:18 2024 00:36:13.173 read: IOPS=2599, BW=20.3MiB/s (21.3MB/s)(102MiB/5001msec) 00:36:13.173 slat (nsec): min=5959, max=71752, avg=15168.90, stdev=11370.74 00:36:13.173 clat (usec): min=622, max=5891, avg=3028.41, stdev=472.09 00:36:13.173 lat (usec): min=642, max=5901, avg=3043.58, stdev=472.60 00:36:13.173 clat percentiles (usec): 00:36:13.173 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:36:13.173 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:36:13.173 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3523], 95.00th=[ 3851], 00:36:13.173 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5604], 99.95th=[ 
5800], 00:36:13.173 | 99.99th=[ 5866] 00:36:13.173 bw ( KiB/s): min=19856, max=22224, per=24.42%, avg=20813.44, stdev=772.43, samples=9 00:36:13.173 iops : min= 2482, max= 2778, avg=2601.67, stdev=96.56, samples=9 00:36:13.173 lat (usec) : 750=0.02%, 1000=0.04% 00:36:13.173 lat (msec) : 2=0.92%, 4=95.15%, 10=3.87% 00:36:13.173 cpu : usr=97.78%, sys=1.90%, ctx=6, majf=0, minf=9 00:36:13.173 IO depths : 1=0.2%, 2=7.5%, 4=64.5%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.173 issued rwts: total=13001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.173 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.173 00:36:13.173 Run status group 0 (all jobs): 00:36:13.173 READ: bw=83.2MiB/s (87.3MB/s), 20.3MiB/s-21.3MiB/s (21.3MB/s-22.4MB/s), io=416MiB (437MB), run=5001-5002msec 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.173 12:50:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 00:36:13.173 real 0m24.714s 00:36:13.173 user 4m53.078s 00:36:13.173 sys 0m4.735s 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 ************************************ 00:36:13.173 END TEST fio_dif_rand_params 00:36:13.173 ************************************ 00:36:13.173 12:50:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:13.173 12:50:18 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.173 12:50:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 ************************************ 00:36:13.173 START TEST fio_dif_digest 00:36:13.173 ************************************ 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 bdev_null0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.173 [2024-11-20 12:50:18.916098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.173 { 00:36:13.173 "params": { 00:36:13.173 "name": "Nvme$subsystem", 00:36:13.173 "trtype": "$TEST_TRANSPORT", 00:36:13.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.173 "adrfam": "ipv4", 00:36:13.173 "trsvcid": "$NVMF_PORT", 00:36:13.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.173 "hdgst": ${hdgst:-false}, 00:36:13.173 "ddgst": ${ddgst:-false} 00:36:13.173 }, 00:36:13.173 "method": "bdev_nvme_attach_controller" 00:36:13.173 } 00:36:13.173 EOF 00:36:13.173 )") 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.173 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:13.174 12:50:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:13.174 "params": { 00:36:13.174 "name": "Nvme0", 00:36:13.174 "trtype": "tcp", 00:36:13.174 "traddr": "10.0.0.2", 00:36:13.174 "adrfam": "ipv4", 00:36:13.174 "trsvcid": "4420", 00:36:13.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.174 "hdgst": true, 00:36:13.174 "ddgst": true 00:36:13.174 }, 00:36:13.174 "method": "bdev_nvme_attach_controller" 00:36:13.174 }' 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.430 12:50:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.687 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:13.687 ... 
00:36:13.687 fio-3.35 00:36:13.687 Starting 3 threads 00:36:25.894 00:36:25.894 filename0: (groupid=0, jobs=1): err= 0: pid=446593: Wed Nov 20 12:50:29 2024 00:36:25.894 read: IOPS=299, BW=37.4MiB/s (39.3MB/s)(375MiB/10005msec) 00:36:25.894 slat (nsec): min=6213, max=32198, avg=10965.94, stdev=1724.15 00:36:25.894 clat (usec): min=6387, max=13202, avg=10001.54, stdev=737.74 00:36:25.894 lat (usec): min=6398, max=13214, avg=10012.51, stdev=737.69 00:36:25.894 clat percentiles (usec): 00:36:25.894 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:36:25.894 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:36:25.894 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:36:25.894 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12649], 99.95th=[13042], 00:36:25.894 | 99.99th=[13173] 00:36:25.894 bw ( KiB/s): min=36352, max=39424, per=35.27%, avg=38386.53, stdev=851.08, samples=19 00:36:25.894 iops : min= 284, max= 308, avg=299.89, stdev= 6.65, samples=19 00:36:25.894 lat (msec) : 10=49.92%, 20=50.08% 00:36:25.894 cpu : usr=94.74%, sys=4.97%, ctx=16, majf=0, minf=46 00:36:25.894 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.894 issued rwts: total=2997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.894 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.894 filename0: (groupid=0, jobs=1): err= 0: pid=446594: Wed Nov 20 12:50:29 2024 00:36:25.894 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(353MiB/10043msec) 00:36:25.894 slat (nsec): min=6243, max=52490, avg=11054.21, stdev=1838.96 00:36:25.894 clat (usec): min=7802, max=48508, avg=10653.28, stdev=1244.10 00:36:25.894 lat (usec): min=7814, max=48520, avg=10664.34, stdev=1244.11 00:36:25.894 clat percentiles (usec): 00:36:25.894 | 1.00th=[ 8848], 
5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:36:25.894 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:36:25.894 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:36:25.894 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13960], 99.95th=[46400], 00:36:25.894 | 99.99th=[48497] 00:36:25.894 bw ( KiB/s): min=35328, max=37376, per=33.15%, avg=36083.20, stdev=560.10, samples=20 00:36:25.894 iops : min= 276, max= 292, avg=281.90, stdev= 4.38, samples=20 00:36:25.894 lat (msec) : 10=19.21%, 20=80.72%, 50=0.07% 00:36:25.894 cpu : usr=94.85%, sys=4.86%, ctx=16, majf=0, minf=78 00:36:25.894 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.894 issued rwts: total=2821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.894 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.894 filename0: (groupid=0, jobs=1): err= 0: pid=446595: Wed Nov 20 12:50:29 2024 00:36:25.894 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(340MiB/10044msec) 00:36:25.894 slat (nsec): min=6221, max=26102, avg=11153.80, stdev=1502.34 00:36:25.894 clat (usec): min=8664, max=51002, avg=11037.38, stdev=1242.00 00:36:25.894 lat (usec): min=8677, max=51014, avg=11048.53, stdev=1242.04 00:36:25.894 clat percentiles (usec): 00:36:25.894 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:36:25.894 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:36:25.894 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:36:25.894 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14484], 99.95th=[43779], 00:36:25.894 | 99.99th=[51119] 00:36:25.894 bw ( KiB/s): min=34048, max=35840, per=32.00%, avg=34828.80, stdev=443.21, samples=20 00:36:25.894 iops : min= 266, max= 280, avg=272.10, stdev= 3.46, samples=20 
00:36:25.894 lat (msec) : 10=7.34%, 20=92.58%, 50=0.04%, 100=0.04% 00:36:25.894 cpu : usr=94.79%, sys=4.92%, ctx=18, majf=0, minf=47 00:36:25.894 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.895 issued rwts: total=2723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.895 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.895 00:36:25.895 Run status group 0 (all jobs): 00:36:25.895 READ: bw=106MiB/s (111MB/s), 33.9MiB/s-37.4MiB/s (35.5MB/s-39.3MB/s), io=1068MiB (1119MB), run=10005-10044msec 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.895 00:36:25.895 real 
0m11.226s 00:36:25.895 user 0m34.867s 00:36:25.895 sys 0m1.864s 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.895 12:50:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.895 ************************************ 00:36:25.895 END TEST fio_dif_digest 00:36:25.895 ************************************ 00:36:25.895 12:50:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:25.895 12:50:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.895 rmmod nvme_tcp 00:36:25.895 rmmod nvme_fabrics 00:36:25.895 rmmod nvme_keyring 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 437989 ']' 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 437989 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 437989 ']' 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 437989 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 437989 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.895 12:50:30 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 437989' 00:36:25.895 killing process with pid 437989 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 437989 00:36:25.895 12:50:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 437989 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:25.895 12:50:30 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:27.812 Waiting for block devices as requested 00:36:27.812 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:27.812 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:27.812 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:27.812 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:27.812 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:28.071 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:28.071 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:28.071 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:28.071 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:28.331 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:28.331 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:28.331 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:28.590 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:28.590 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:28.590 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:28.849 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:28.849 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:28.849 12:50:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.849 12:50:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:28.849 12:50:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.386 12:50:36 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:31.386 00:36:31.386 real 1m14.735s 00:36:31.386 user 7m11.142s 00:36:31.387 sys 0m20.424s 00:36:31.387 12:50:36 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.387 12:50:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.387 ************************************ 00:36:31.387 END TEST nvmf_dif 00:36:31.387 ************************************ 00:36:31.387 12:50:36 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.387 12:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:31.387 12:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.387 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:36:31.387 ************************************ 00:36:31.387 START TEST nvmf_abort_qd_sizes 00:36:31.387 ************************************ 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.387 * Looking for test storage... 
00:36:31.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.387 --rc genhtml_branch_coverage=1 00:36:31.387 --rc genhtml_function_coverage=1 00:36:31.387 --rc genhtml_legend=1 00:36:31.387 --rc geninfo_all_blocks=1 00:36:31.387 --rc geninfo_unexecuted_blocks=1 00:36:31.387 00:36:31.387 ' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.387 --rc genhtml_branch_coverage=1 00:36:31.387 --rc genhtml_function_coverage=1 00:36:31.387 --rc genhtml_legend=1 00:36:31.387 --rc 
geninfo_all_blocks=1 00:36:31.387 --rc geninfo_unexecuted_blocks=1 00:36:31.387 00:36:31.387 ' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.387 --rc genhtml_branch_coverage=1 00:36:31.387 --rc genhtml_function_coverage=1 00:36:31.387 --rc genhtml_legend=1 00:36:31.387 --rc geninfo_all_blocks=1 00:36:31.387 --rc geninfo_unexecuted_blocks=1 00:36:31.387 00:36:31.387 ' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:31.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.387 --rc genhtml_branch_coverage=1 00:36:31.387 --rc genhtml_function_coverage=1 00:36:31.387 --rc genhtml_legend=1 00:36:31.387 --rc geninfo_all_blocks=1 00:36:31.387 --rc geninfo_unexecuted_blocks=1 00:36:31.387 00:36:31.387 ' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.387 12:50:36 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.387 12:50:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:31.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.387 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.388 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:31.388 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:31.388 12:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:31.388 12:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:36.664 12:50:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:36.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:36.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:36.664 Found net devices under 0000:86:00.0: cvl_0_0 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:36.664 Found net devices under 0000:86:00.1: cvl_0_1 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:36.664 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:36.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:36.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:36:36.923 00:36:36.923 --- 10.0.0.2 ping statistics --- 00:36:36.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.923 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:36.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:36.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:36:36.923 00:36:36.923 --- 10.0.0.1 ping statistics --- 00:36:36.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.923 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:36.923 12:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:40.213 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:40.213 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:41.149 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:41.408 12:50:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=454391 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 454391 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 454391 ']' 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.408 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.408 [2024-11-20 12:50:47.119916] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:36:41.408 [2024-11-20 12:50:47.119954] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.666 [2024-11-20 12:50:47.199703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:41.666 [2024-11-20 12:50:47.244692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.666 [2024-11-20 12:50:47.244725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.666 [2024-11-20 12:50:47.244732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.666 [2024-11-20 12:50:47.244738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.666 [2024-11-20 12:50:47.244744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:41.666 [2024-11-20 12:50:47.246227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.666 [2024-11-20 12:50:47.246265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:41.666 [2024-11-20 12:50:47.246288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.666 [2024-11-20 12:50:47.246288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.666 12:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:41.923 ************************************ 00:36:41.923 START TEST spdk_target_abort 00:36:41.923 ************************************ 00:36:41.923 12:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:41.923 12:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:41.923 12:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:36:41.923 12:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.923 12:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.202 spdk_targetn1 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.202 [2024-11-20 12:50:50.265185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.202 [2024-11-20 12:50:50.304169] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:45.202 12:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.477 Initializing NVMe Controllers 00:36:48.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:48.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:48.477 Initialization complete. Launching workers. 
00:36:48.477 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16032, failed: 0 00:36:48.477 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1432, failed to submit 14600 00:36:48.477 success 741, unsuccessful 691, failed 0 00:36:48.477 12:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.477 12:50:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:51.754 Initializing NVMe Controllers 00:36:51.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:51.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:51.754 Initialization complete. Launching workers. 00:36:51.754 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8664, failed: 0 00:36:51.754 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7426 00:36:51.754 success 314, unsuccessful 924, failed 0 00:36:51.754 12:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:51.754 12:50:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:55.030 Initializing NVMe Controllers 00:36:55.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.030 Initialization complete. Launching workers. 
00:36:55.030 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38322, failed: 0 00:36:55.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 35571 00:36:55.031 success 613, unsuccessful 2138, failed 0 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.031 12:51:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 454391 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 454391 ']' 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 454391 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454391 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454391' 00:36:56.469 killing process with pid 454391 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 454391 00:36:56.469 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 454391 00:36:56.728 00:36:56.728 real 0m14.895s 00:36:56.728 user 0m56.869s 00:36:56.728 sys 0m2.651s 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.728 ************************************ 00:36:56.728 END TEST spdk_target_abort 00:36:56.728 ************************************ 00:36:56.728 12:51:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:56.728 12:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.728 12:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.728 12:51:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:56.728 ************************************ 00:36:56.728 START TEST kernel_target_abort 00:36:56.728 ************************************ 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:56.728 12:51:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:56.728 12:51:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:00.018 Waiting for block devices as requested 00:37:00.018 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:00.018 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:00.018 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:00.277 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:00.277 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:00.277 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:00.277 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:00.537 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:00.537 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:00.537 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:00.796 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:00.796 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:00.796 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:01.055 No valid GPT data, bailing 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:01.055 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:01.055 00:37:01.055 Discovery Log Number of Records 2, Generation counter 2 00:37:01.055 =====Discovery Log Entry 0====== 00:37:01.055 trtype: tcp 00:37:01.055 adrfam: ipv4 00:37:01.055 subtype: current discovery subsystem 00:37:01.055 treq: not specified, sq flow control disable supported 00:37:01.055 portid: 1 00:37:01.055 trsvcid: 4420 00:37:01.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:01.055 traddr: 10.0.0.1 00:37:01.055 eflags: none 00:37:01.055 sectype: none 00:37:01.055 =====Discovery Log Entry 1====== 00:37:01.055 trtype: tcp 00:37:01.055 adrfam: ipv4 00:37:01.055 subtype: nvme subsystem 00:37:01.055 treq: not specified, sq flow control disable supported 00:37:01.056 portid: 1 00:37:01.056 trsvcid: 4420 00:37:01.056 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:01.056 traddr: 10.0.0.1 00:37:01.056 eflags: none 00:37:01.056 sectype: none 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:01.056 12:51:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:04.340 Initializing NVMe Controllers 00:37:04.340 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.340 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.340 Initialization complete. Launching workers. 
00:37:04.340 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95347, failed: 0 00:37:04.340 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95347, failed to submit 0 00:37:04.340 success 0, unsuccessful 95347, failed 0 00:37:04.340 12:51:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:04.340 12:51:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:07.628 Initializing NVMe Controllers 00:37:07.628 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:07.628 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:07.628 Initialization complete. Launching workers. 00:37:07.628 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150511, failed: 0 00:37:07.628 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38018, failed to submit 112493 00:37:07.628 success 0, unsuccessful 38018, failed 0 00:37:07.628 12:51:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:07.628 12:51:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.977 Initializing NVMe Controllers 00:37:10.977 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.977 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.977 Initialization complete. Launching workers. 
00:37:10.977 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142563, failed: 0 00:37:10.977 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35714, failed to submit 106849 00:37:10.977 success 0, unsuccessful 35714, failed 0 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:10.977 12:51:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:13.512 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:13.512 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:14.886 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:14.886 00:37:14.886 real 0m18.183s 00:37:14.886 user 0m9.117s 00:37:14.886 sys 0m5.134s 00:37:14.886 12:51:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:14.886 12:51:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:14.886 ************************************ 00:37:14.886 END TEST kernel_target_abort 00:37:14.886 ************************************ 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.887 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.887 rmmod nvme_tcp 00:37:15.145 rmmod nvme_fabrics 00:37:15.145 rmmod nvme_keyring 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 454391 ']' 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 454391 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 454391 ']' 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 454391 00:37:15.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (454391) - No such process 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 454391 is not found' 00:37:15.145 Process with pid 454391 is not found 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:15.145 12:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:17.679 Waiting for block devices as requested 00:37:17.679 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:17.938 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:17.938 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:18.197 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:18.197 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:18.197 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:18.197 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:18.455 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:18.455 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:18.455 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:18.714 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:18.714 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:18.714 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:18.714 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:18.973 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:37:18.973 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:18.973 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:19.232 12:51:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.136 12:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.137 00:37:21.137 real 0m50.158s 00:37:21.137 user 1m10.379s 00:37:21.137 sys 0m16.433s 00:37:21.137 12:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:21.137 12:51:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.137 ************************************ 00:37:21.137 END TEST nvmf_abort_qd_sizes 00:37:21.137 ************************************ 00:37:21.137 12:51:26 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:21.137 12:51:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:21.137 12:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:37:21.137 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:37:21.395 ************************************ 00:37:21.395 START TEST keyring_file 00:37:21.395 ************************************ 00:37:21.395 12:51:26 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:21.395 * Looking for test storage... 00:37:21.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:21.396 12:51:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:21.396 12:51:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:37:21.396 12:51:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.396 12:51:27 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:21.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.396 --rc genhtml_branch_coverage=1 00:37:21.396 --rc genhtml_function_coverage=1 00:37:21.396 --rc genhtml_legend=1 00:37:21.396 --rc geninfo_all_blocks=1 00:37:21.396 --rc geninfo_unexecuted_blocks=1 00:37:21.396 00:37:21.396 ' 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:21.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.396 --rc genhtml_branch_coverage=1 00:37:21.396 --rc genhtml_function_coverage=1 00:37:21.396 --rc genhtml_legend=1 00:37:21.396 --rc geninfo_all_blocks=1 00:37:21.396 --rc 
geninfo_unexecuted_blocks=1 00:37:21.396 00:37:21.396 ' 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:21.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.396 --rc genhtml_branch_coverage=1 00:37:21.396 --rc genhtml_function_coverage=1 00:37:21.396 --rc genhtml_legend=1 00:37:21.396 --rc geninfo_all_blocks=1 00:37:21.396 --rc geninfo_unexecuted_blocks=1 00:37:21.396 00:37:21.396 ' 00:37:21.396 12:51:27 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:21.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.396 --rc genhtml_branch_coverage=1 00:37:21.396 --rc genhtml_function_coverage=1 00:37:21.396 --rc genhtml_legend=1 00:37:21.396 --rc geninfo_all_blocks=1 00:37:21.396 --rc geninfo_unexecuted_blocks=1 00:37:21.396 00:37:21.396 ' 00:37:21.396 12:51:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:21.396 12:51:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.396 12:51:27 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.396 12:51:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.396 12:51:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.396 12:51:27 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.396 12:51:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.396 12:51:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:21.396 12:51:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:21.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.396 12:51:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.396 12:51:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:21.396 12:51:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:21.396 12:51:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:21.397 12:51:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:21.397 12:51:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:21.397 12:51:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:21.397 12:51:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MlQC28GKEW 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:21.397 12:51:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MlQC28GKEW 00:37:21.397 12:51:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MlQC28GKEW 00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MlQC28GKEW 00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YDf7cfFFO2 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:21.656 12:51:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YDf7cfFFO2 00:37:21.656 12:51:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YDf7cfFFO2 00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YDf7cfFFO2 
00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=463693 00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:21.656 12:51:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 463693 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 463693 ']' 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:21.656 12:51:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.656 [2024-11-20 12:51:27.264689] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:37:21.656 [2024-11-20 12:51:27.264735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463693 ] 00:37:21.656 [2024-11-20 12:51:27.340173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.656 [2024-11-20 12:51:27.382091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:21.915 12:51:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.915 [2024-11-20 12:51:27.589971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.915 null0 00:37:21.915 [2024-11-20 12:51:27.622028] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:21.915 [2024-11-20 12:51:27.622238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.915 12:51:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:21.915 [2024-11-20 12:51:27.650095] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:21.915 request: 00:37:21.915 { 00:37:21.915 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.915 "secure_channel": false, 00:37:21.915 "listen_address": { 00:37:21.915 "trtype": "tcp", 00:37:21.915 "traddr": "127.0.0.1", 00:37:21.915 "trsvcid": "4420" 00:37:21.915 }, 00:37:21.915 "method": "nvmf_subsystem_add_listener", 00:37:21.915 "req_id": 1 00:37:21.915 } 00:37:21.915 Got JSON-RPC error response 00:37:21.915 response: 00:37:21.915 { 00:37:21.915 "code": -32602, 00:37:21.915 "message": "Invalid parameters" 00:37:21.915 } 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:21.915 12:51:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=463704 00:37:21.915 12:51:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:21.915 12:51:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 463704 /var/tmp/bperf.sock 00:37:21.915 12:51:27 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 463704 ']' 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:21.915 12:51:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.174 [2024-11-20 12:51:27.702608] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 00:37:22.174 [2024-11-20 12:51:27.702647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463704 ] 00:37:22.174 [2024-11-20 12:51:27.775995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.174 [2024-11-20 12:51:27.815997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.174 12:51:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:22.174 12:51:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:22.174 12:51:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:22.174 12:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:22.433 12:51:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YDf7cfFFO2 00:37:22.433 12:51:28 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YDf7cfFFO2 00:37:22.691 12:51:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:22.691 12:51:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:22.691 12:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.691 12:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.691 12:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.950 12:51:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MlQC28GKEW == \/\t\m\p\/\t\m\p\.\M\l\Q\C\2\8\G\K\E\W ]] 00:37:22.950 12:51:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:22.950 12:51:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:22.950 12:51:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.YDf7cfFFO2 == \/\t\m\p\/\t\m\p\.\Y\D\f\7\c\f\F\F\O\2 ]] 00:37:22.950 12:51:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.950 12:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:23.209 12:51:28 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:23.209 12:51:28 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:23.209 12:51:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.209 12:51:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.209 12:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.209 12:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.209 12:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.468 12:51:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:23.468 12:51:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.468 12:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.726 [2024-11-20 12:51:29.246405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:23.726 nvme0n1 00:37:23.726 12:51:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:23.726 12:51:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.726 12:51:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.726 12:51:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.726 12:51:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.726 12:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:23.985 12:51:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:23.985 12:51:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:23.985 12:51:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:23.985 12:51:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.985 12:51:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.985 12:51:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:23.985 12:51:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.985 12:51:29 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:23.985 12:51:29 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:24.244 Running I/O for 1 seconds... 00:37:25.179 19366.00 IOPS, 75.65 MiB/s 00:37:25.179 Latency(us) 00:37:25.179 [2024-11-20T11:51:30.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.179 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:25.179 nvme0n1 : 1.00 19421.85 75.87 0.00 0.00 6579.56 2262.55 10111.27 00:37:25.179 [2024-11-20T11:51:30.945Z] =================================================================================================================== 00:37:25.179 [2024-11-20T11:51:30.945Z] Total : 19421.85 75.87 0.00 0.00 6579.56 2262.55 10111.27 00:37:25.179 { 00:37:25.179 "results": [ 00:37:25.179 { 00:37:25.179 "job": "nvme0n1", 00:37:25.179 "core_mask": "0x2", 00:37:25.179 "workload": "randrw", 00:37:25.179 "percentage": 50, 00:37:25.179 "status": "finished", 00:37:25.179 "queue_depth": 128, 00:37:25.179 "io_size": 4096, 00:37:25.179 "runtime": 1.003818, 00:37:25.179 "iops": 19421.847386677666, 00:37:25.179 "mibps": 75.86659135420963, 
00:37:25.179 "io_failed": 0, 00:37:25.179 "io_timeout": 0, 00:37:25.179 "avg_latency_us": 6579.556460128574, 00:37:25.179 "min_latency_us": 2262.552380952381, 00:37:25.179 "max_latency_us": 10111.26857142857 00:37:25.179 } 00:37:25.179 ], 00:37:25.179 "core_count": 1 00:37:25.179 } 00:37:25.179 12:51:30 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:25.179 12:51:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:25.438 12:51:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:25.438 12:51:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.438 12:51:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.438 12:51:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.438 12:51:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.438 12:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.697 12:51:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:25.697 12:51:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:25.697 12:51:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:25.697 12:51:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:25.697 12:51:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:25.697 12:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:25.956 [2024-11-20 12:51:31.614005] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:25.956 [2024-11-20 12:51:31.614153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dad00 (107): Transport endpoint is not connected 00:37:25.956 [2024-11-20 12:51:31.615148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dad00 (9): Bad file descriptor 00:37:25.956 [2024-11-20 12:51:31.616150] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:25.956 [2024-11-20 12:51:31.616159] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:25.956 [2024-11-20 12:51:31.616166] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:25.956 [2024-11-20 12:51:31.616175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:37:25.956 request: 00:37:25.956 { 00:37:25.956 "name": "nvme0", 00:37:25.956 "trtype": "tcp", 00:37:25.956 "traddr": "127.0.0.1", 00:37:25.956 "adrfam": "ipv4", 00:37:25.956 "trsvcid": "4420", 00:37:25.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.956 "prchk_reftag": false, 00:37:25.956 "prchk_guard": false, 00:37:25.956 "hdgst": false, 00:37:25.956 "ddgst": false, 00:37:25.956 "psk": "key1", 00:37:25.956 "allow_unrecognized_csi": false, 00:37:25.956 "method": "bdev_nvme_attach_controller", 00:37:25.956 "req_id": 1 00:37:25.956 } 00:37:25.956 Got JSON-RPC error response 00:37:25.956 response: 00:37:25.956 { 00:37:25.956 "code": -5, 00:37:25.956 "message": "Input/output error" 00:37:25.956 } 00:37:25.956 12:51:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:25.956 12:51:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:25.956 12:51:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:25.956 12:51:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:25.957 12:51:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:25.957 12:51:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.957 12:51:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.957 12:51:31 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:37:25.957 12:51:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.957 12:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.216 12:51:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:26.216 12:51:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:26.216 12:51:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.216 12:51:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.216 12:51:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.216 12:51:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.216 12:51:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.475 12:51:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:26.475 12:51:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:26.475 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:26.475 12:51:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:26.475 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:26.734 12:51:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:26.734 12:51:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:26.734 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.993 12:51:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:37:26.993 12:51:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.MlQC28GKEW 00:37:26.993 12:51:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:26.993 12:51:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:26.993 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:27.252 [2024-11-20 12:51:32.772215] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MlQC28GKEW': 0100660 00:37:27.252 [2024-11-20 12:51:32.772239] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:27.252 request: 00:37:27.252 { 00:37:27.252 "name": "key0", 00:37:27.252 "path": "/tmp/tmp.MlQC28GKEW", 00:37:27.252 "method": "keyring_file_add_key", 00:37:27.252 "req_id": 1 00:37:27.252 } 00:37:27.252 Got JSON-RPC error response 00:37:27.252 response: 00:37:27.252 { 00:37:27.252 "code": -1, 00:37:27.252 "message": "Operation not permitted" 00:37:27.252 } 00:37:27.252 12:51:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:27.252 12:51:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:27.253 12:51:32 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:27.253 12:51:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:27.253 12:51:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.MlQC28GKEW 00:37:27.253 12:51:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MlQC28GKEW 00:37:27.253 12:51:32 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.MlQC28GKEW 00:37:27.253 12:51:32 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:27.253 12:51:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.512 12:51:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:27.512 12:51:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:27.512 12:51:33 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:27.512 12:51:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.512 12:51:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:27.771 [2024-11-20 12:51:33.369798] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MlQC28GKEW': No such file or directory 00:37:27.771 [2024-11-20 12:51:33.369820] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:27.771 [2024-11-20 12:51:33.369835] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:27.771 [2024-11-20 12:51:33.369858] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:27.771 [2024-11-20 12:51:33.369865] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:27.771 [2024-11-20 12:51:33.369872] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:27.771 request: 00:37:27.771 { 00:37:27.771 "name": "nvme0", 00:37:27.771 "trtype": "tcp", 00:37:27.771 "traddr": "127.0.0.1", 00:37:27.771 "adrfam": "ipv4", 00:37:27.771 "trsvcid": "4420", 00:37:27.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.771 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:37:27.771 "prchk_reftag": false, 00:37:27.771 "prchk_guard": false, 00:37:27.771 "hdgst": false, 00:37:27.771 "ddgst": false, 00:37:27.771 "psk": "key0", 00:37:27.771 "allow_unrecognized_csi": false, 00:37:27.771 "method": "bdev_nvme_attach_controller", 00:37:27.771 "req_id": 1 00:37:27.771 } 00:37:27.771 Got JSON-RPC error response 00:37:27.771 response: 00:37:27.771 { 00:37:27.771 "code": -19, 00:37:27.771 "message": "No such device" 00:37:27.771 } 00:37:27.771 12:51:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:27.771 12:51:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:27.771 12:51:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:27.771 12:51:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:27.771 12:51:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:27.771 12:51:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:28.031 12:51:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.R4rt820o64 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:28.031 12:51:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:28.031 12:51:33 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:37:28.031 12:51:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:28.031 12:51:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:28.031 12:51:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:28.031 12:51:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.R4rt820o64 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.R4rt820o64 00:37:28.031 12:51:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.R4rt820o64 00:37:28.031 12:51:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R4rt820o64 00:37:28.031 12:51:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R4rt820o64 00:37:28.290 12:51:33 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:28.290 12:51:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:28.290 nvme0n1 00:37:28.549 12:51:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.549 12:51:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:28.549 12:51:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:28.549 12:51:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:28.809 12:51:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:28.809 12:51:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:28.809 12:51:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.809 12:51:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.809 12:51:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.067 12:51:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:29.067 12:51:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:29.067 12:51:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.067 12:51:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:29.067 12:51:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.067 12:51:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.068 12:51:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.326 12:51:34 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:29.326 12:51:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:29.326 12:51:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:37:29.326 12:51:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:29.326 12:51:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:29.326 12:51:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.586 12:51:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:29.586 12:51:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R4rt820o64 00:37:29.586 12:51:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R4rt820o64 00:37:29.845 12:51:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YDf7cfFFO2 00:37:29.845 12:51:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YDf7cfFFO2 00:37:30.105 12:51:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.105 12:51:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.105 nvme0n1 00:37:30.364 12:51:35 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:30.364 12:51:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:30.625 12:51:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:30.625 "subsystems": [ 00:37:30.625 { 00:37:30.625 "subsystem": 
"keyring", 00:37:30.625 "config": [ 00:37:30.625 { 00:37:30.625 "method": "keyring_file_add_key", 00:37:30.625 "params": { 00:37:30.625 "name": "key0", 00:37:30.625 "path": "/tmp/tmp.R4rt820o64" 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "keyring_file_add_key", 00:37:30.625 "params": { 00:37:30.625 "name": "key1", 00:37:30.625 "path": "/tmp/tmp.YDf7cfFFO2" 00:37:30.625 } 00:37:30.625 } 00:37:30.625 ] 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "subsystem": "iobuf", 00:37:30.625 "config": [ 00:37:30.625 { 00:37:30.625 "method": "iobuf_set_options", 00:37:30.625 "params": { 00:37:30.625 "small_pool_count": 8192, 00:37:30.625 "large_pool_count": 1024, 00:37:30.625 "small_bufsize": 8192, 00:37:30.625 "large_bufsize": 135168, 00:37:30.625 "enable_numa": false 00:37:30.625 } 00:37:30.625 } 00:37:30.625 ] 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "subsystem": "sock", 00:37:30.625 "config": [ 00:37:30.625 { 00:37:30.625 "method": "sock_set_default_impl", 00:37:30.625 "params": { 00:37:30.625 "impl_name": "posix" 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "sock_impl_set_options", 00:37:30.625 "params": { 00:37:30.625 "impl_name": "ssl", 00:37:30.625 "recv_buf_size": 4096, 00:37:30.625 "send_buf_size": 4096, 00:37:30.625 "enable_recv_pipe": true, 00:37:30.625 "enable_quickack": false, 00:37:30.625 "enable_placement_id": 0, 00:37:30.625 "enable_zerocopy_send_server": true, 00:37:30.625 "enable_zerocopy_send_client": false, 00:37:30.625 "zerocopy_threshold": 0, 00:37:30.625 "tls_version": 0, 00:37:30.625 "enable_ktls": false 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "sock_impl_set_options", 00:37:30.625 "params": { 00:37:30.625 "impl_name": "posix", 00:37:30.625 "recv_buf_size": 2097152, 00:37:30.625 "send_buf_size": 2097152, 00:37:30.625 "enable_recv_pipe": true, 00:37:30.625 "enable_quickack": false, 00:37:30.625 "enable_placement_id": 0, 00:37:30.625 "enable_zerocopy_send_server": true, 
00:37:30.625 "enable_zerocopy_send_client": false, 00:37:30.625 "zerocopy_threshold": 0, 00:37:30.625 "tls_version": 0, 00:37:30.625 "enable_ktls": false 00:37:30.625 } 00:37:30.625 } 00:37:30.625 ] 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "subsystem": "vmd", 00:37:30.625 "config": [] 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "subsystem": "accel", 00:37:30.625 "config": [ 00:37:30.625 { 00:37:30.625 "method": "accel_set_options", 00:37:30.625 "params": { 00:37:30.625 "small_cache_size": 128, 00:37:30.625 "large_cache_size": 16, 00:37:30.625 "task_count": 2048, 00:37:30.625 "sequence_count": 2048, 00:37:30.625 "buf_count": 2048 00:37:30.625 } 00:37:30.625 } 00:37:30.625 ] 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "subsystem": "bdev", 00:37:30.625 "config": [ 00:37:30.625 { 00:37:30.625 "method": "bdev_set_options", 00:37:30.625 "params": { 00:37:30.625 "bdev_io_pool_size": 65535, 00:37:30.625 "bdev_io_cache_size": 256, 00:37:30.625 "bdev_auto_examine": true, 00:37:30.625 "iobuf_small_cache_size": 128, 00:37:30.625 "iobuf_large_cache_size": 16 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "bdev_raid_set_options", 00:37:30.625 "params": { 00:37:30.625 "process_window_size_kb": 1024, 00:37:30.625 "process_max_bandwidth_mb_sec": 0 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "bdev_iscsi_set_options", 00:37:30.625 "params": { 00:37:30.625 "timeout_sec": 30 00:37:30.625 } 00:37:30.625 }, 00:37:30.625 { 00:37:30.625 "method": "bdev_nvme_set_options", 00:37:30.625 "params": { 00:37:30.625 "action_on_timeout": "none", 00:37:30.625 "timeout_us": 0, 00:37:30.625 "timeout_admin_us": 0, 00:37:30.625 "keep_alive_timeout_ms": 10000, 00:37:30.625 "arbitration_burst": 0, 00:37:30.625 "low_priority_weight": 0, 00:37:30.625 "medium_priority_weight": 0, 00:37:30.625 "high_priority_weight": 0, 00:37:30.625 "nvme_adminq_poll_period_us": 10000, 00:37:30.625 "nvme_ioq_poll_period_us": 0, 00:37:30.625 "io_queue_requests": 512, 
00:37:30.625 "delay_cmd_submit": true, 00:37:30.625 "transport_retry_count": 4, 00:37:30.625 "bdev_retry_count": 3, 00:37:30.625 "transport_ack_timeout": 0, 00:37:30.625 "ctrlr_loss_timeout_sec": 0, 00:37:30.625 "reconnect_delay_sec": 0, 00:37:30.625 "fast_io_fail_timeout_sec": 0, 00:37:30.625 "disable_auto_failback": false, 00:37:30.625 "generate_uuids": false, 00:37:30.625 "transport_tos": 0, 00:37:30.625 "nvme_error_stat": false, 00:37:30.625 "rdma_srq_size": 0, 00:37:30.625 "io_path_stat": false, 00:37:30.625 "allow_accel_sequence": false, 00:37:30.625 "rdma_max_cq_size": 0, 00:37:30.625 "rdma_cm_event_timeout_ms": 0, 00:37:30.625 "dhchap_digests": [ 00:37:30.625 "sha256", 00:37:30.625 "sha384", 00:37:30.625 "sha512" 00:37:30.625 ], 00:37:30.625 "dhchap_dhgroups": [ 00:37:30.625 "null", 00:37:30.625 "ffdhe2048", 00:37:30.625 "ffdhe3072", 00:37:30.625 "ffdhe4096", 00:37:30.625 "ffdhe6144", 00:37:30.625 "ffdhe8192" 00:37:30.625 ] 00:37:30.625 } 00:37:30.625 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_nvme_attach_controller", 00:37:30.626 "params": { 00:37:30.626 "name": "nvme0", 00:37:30.626 "trtype": "TCP", 00:37:30.626 "adrfam": "IPv4", 00:37:30.626 "traddr": "127.0.0.1", 00:37:30.626 "trsvcid": "4420", 00:37:30.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.626 "prchk_reftag": false, 00:37:30.626 "prchk_guard": false, 00:37:30.626 "ctrlr_loss_timeout_sec": 0, 00:37:30.626 "reconnect_delay_sec": 0, 00:37:30.626 "fast_io_fail_timeout_sec": 0, 00:37:30.626 "psk": "key0", 00:37:30.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.626 "hdgst": false, 00:37:30.626 "ddgst": false, 00:37:30.626 "multipath": "multipath" 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_nvme_set_hotplug", 00:37:30.626 "params": { 00:37:30.626 "period_us": 100000, 00:37:30.626 "enable": false 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_wait_for_examine" 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }, 00:37:30.626 { 
00:37:30.626 "subsystem": "nbd", 00:37:30.626 "config": [] 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }' 00:37:30.626 12:51:36 keyring_file -- keyring/file.sh@115 -- # killprocess 463704 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 463704 ']' 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 463704 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463704 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463704' 00:37:30.626 killing process with pid 463704 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@973 -- # kill 463704 00:37:30.626 Received shutdown signal, test time was about 1.000000 seconds 00:37:30.626 00:37:30.626 Latency(us) 00:37:30.626 [2024-11-20T11:51:36.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.626 [2024-11-20T11:51:36.392Z] =================================================================================================================== 00:37:30.626 [2024-11-20T11:51:36.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@978 -- # wait 463704 00:37:30.626 12:51:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=465216 00:37:30.626 12:51:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 465216 /var/tmp/bperf.sock 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 465216 ']' 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:37:30.626 12:51:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.626 12:51:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.626 12:51:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:30.626 "subsystems": [ 00:37:30.626 { 00:37:30.626 "subsystem": "keyring", 00:37:30.626 "config": [ 00:37:30.626 { 00:37:30.626 "method": "keyring_file_add_key", 00:37:30.626 "params": { 00:37:30.626 "name": "key0", 00:37:30.626 "path": "/tmp/tmp.R4rt820o64" 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "keyring_file_add_key", 00:37:30.626 "params": { 00:37:30.626 "name": "key1", 00:37:30.626 "path": "/tmp/tmp.YDf7cfFFO2" 00:37:30.626 } 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "subsystem": "iobuf", 00:37:30.626 "config": [ 00:37:30.626 { 00:37:30.626 "method": "iobuf_set_options", 00:37:30.626 "params": { 00:37:30.626 "small_pool_count": 8192, 00:37:30.626 "large_pool_count": 1024, 00:37:30.626 "small_bufsize": 8192, 00:37:30.626 "large_bufsize": 135168, 00:37:30.626 "enable_numa": false 00:37:30.626 } 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "subsystem": "sock", 00:37:30.626 "config": [ 00:37:30.626 { 00:37:30.626 "method": "sock_set_default_impl", 00:37:30.626 "params": { 00:37:30.626 "impl_name": "posix" 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "sock_impl_set_options", 00:37:30.626 "params": { 00:37:30.626 "impl_name": "ssl", 00:37:30.626 "recv_buf_size": 4096, 00:37:30.626 
"send_buf_size": 4096, 00:37:30.626 "enable_recv_pipe": true, 00:37:30.626 "enable_quickack": false, 00:37:30.626 "enable_placement_id": 0, 00:37:30.626 "enable_zerocopy_send_server": true, 00:37:30.626 "enable_zerocopy_send_client": false, 00:37:30.626 "zerocopy_threshold": 0, 00:37:30.626 "tls_version": 0, 00:37:30.626 "enable_ktls": false 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "sock_impl_set_options", 00:37:30.626 "params": { 00:37:30.626 "impl_name": "posix", 00:37:30.626 "recv_buf_size": 2097152, 00:37:30.626 "send_buf_size": 2097152, 00:37:30.626 "enable_recv_pipe": true, 00:37:30.626 "enable_quickack": false, 00:37:30.626 "enable_placement_id": 0, 00:37:30.626 "enable_zerocopy_send_server": true, 00:37:30.626 "enable_zerocopy_send_client": false, 00:37:30.626 "zerocopy_threshold": 0, 00:37:30.626 "tls_version": 0, 00:37:30.626 "enable_ktls": false 00:37:30.626 } 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "subsystem": "vmd", 00:37:30.626 "config": [] 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "subsystem": "accel", 00:37:30.626 "config": [ 00:37:30.626 { 00:37:30.626 "method": "accel_set_options", 00:37:30.626 "params": { 00:37:30.626 "small_cache_size": 128, 00:37:30.626 "large_cache_size": 16, 00:37:30.626 "task_count": 2048, 00:37:30.626 "sequence_count": 2048, 00:37:30.626 "buf_count": 2048 00:37:30.626 } 00:37:30.626 } 00:37:30.626 ] 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "subsystem": "bdev", 00:37:30.626 "config": [ 00:37:30.626 { 00:37:30.626 "method": "bdev_set_options", 00:37:30.626 "params": { 00:37:30.626 "bdev_io_pool_size": 65535, 00:37:30.626 "bdev_io_cache_size": 256, 00:37:30.626 "bdev_auto_examine": true, 00:37:30.626 "iobuf_small_cache_size": 128, 00:37:30.626 "iobuf_large_cache_size": 16 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_raid_set_options", 00:37:30.626 "params": { 00:37:30.626 "process_window_size_kb": 1024, 00:37:30.626 
"process_max_bandwidth_mb_sec": 0 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_iscsi_set_options", 00:37:30.626 "params": { 00:37:30.626 "timeout_sec": 30 00:37:30.626 } 00:37:30.626 }, 00:37:30.626 { 00:37:30.626 "method": "bdev_nvme_set_options", 00:37:30.626 "params": { 00:37:30.626 "action_on_timeout": "none", 00:37:30.626 "timeout_us": 0, 00:37:30.626 "timeout_admin_us": 0, 00:37:30.626 "keep_alive_timeout_ms": 10000, 00:37:30.626 "arbitration_burst": 0, 00:37:30.626 "low_priority_weight": 0, 00:37:30.626 "medium_priority_weight": 0, 00:37:30.626 "high_priority_weight": 0, 00:37:30.626 "nvme_adminq_poll_period_us": 10000, 00:37:30.626 "nvme_ioq_poll_period_us": 0, 00:37:30.626 "io_queue_requests": 512, 00:37:30.626 "delay_cmd_submit": true, 00:37:30.626 "transport_retry_count": 4, 00:37:30.626 "bdev_retry_count": 3, 00:37:30.626 "transport_ack_timeout": 0, 00:37:30.626 "ctrlr_loss_timeout_sec": 0, 00:37:30.626 "reconnect_delay_sec": 0, 00:37:30.626 "fast_io_fail_timeout_sec": 0, 00:37:30.626 "disable_auto_failback": false, 00:37:30.626 "generate_uuids": false, 00:37:30.626 "transport_tos": 0, 00:37:30.626 "nvme_error_stat": false, 00:37:30.626 "rdma_srq_size": 0, 00:37:30.626 "io_path_stat": false, 00:37:30.627 "allow_accel_sequence": false, 00:37:30.627 "rdma_max_cq_size": 0, 00:37:30.627 "rdma_cm_event_timeout_ms": 0, 00:37:30.627 "dhchap_digests": [ 00:37:30.627 "sha256", 00:37:30.627 "sha384", 00:37:30.627 "sha512" 00:37:30.627 ], 00:37:30.627 "dhchap_dhgroups": [ 00:37:30.627 "null", 00:37:30.627 "ffdhe2048", 00:37:30.627 "ffdhe3072", 00:37:30.627 "ffdhe4096", 00:37:30.627 "ffdhe6144", 00:37:30.627 "ffdhe8192" 00:37:30.627 ] 00:37:30.627 } 00:37:30.627 }, 00:37:30.627 { 00:37:30.627 "method": "bdev_nvme_attach_controller", 00:37:30.627 "params": { 00:37:30.627 "name": "nvme0", 00:37:30.627 "trtype": "TCP", 00:37:30.627 "adrfam": "IPv4", 00:37:30.627 "traddr": "127.0.0.1", 00:37:30.627 "trsvcid": "4420", 00:37:30.627 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:30.627 "prchk_reftag": false, 00:37:30.627 "prchk_guard": false, 00:37:30.627 "ctrlr_loss_timeout_sec": 0, 00:37:30.627 "reconnect_delay_sec": 0, 00:37:30.627 "fast_io_fail_timeout_sec": 0, 00:37:30.627 "psk": "key0", 00:37:30.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.627 "hdgst": false, 00:37:30.627 "ddgst": false, 00:37:30.627 "multipath": "multipath" 00:37:30.627 } 00:37:30.627 }, 00:37:30.627 { 00:37:30.627 "method": "bdev_nvme_set_hotplug", 00:37:30.627 "params": { 00:37:30.627 "period_us": 100000, 00:37:30.627 "enable": false 00:37:30.627 } 00:37:30.627 }, 00:37:30.627 { 00:37:30.627 "method": "bdev_wait_for_examine" 00:37:30.627 } 00:37:30.627 ] 00:37:30.627 }, 00:37:30.627 { 00:37:30.627 "subsystem": "nbd", 00:37:30.627 "config": [] 00:37:30.627 } 00:37:30.627 ] 00:37:30.627 }' 00:37:30.627 12:51:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.627 12:51:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.627 [2024-11-20 12:51:36.380391] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:37:30.627 [2024-11-20 12:51:36.380437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465216 ] 00:37:30.886 [2024-11-20 12:51:36.453945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.886 [2024-11-20 12:51:36.495677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:31.145 [2024-11-20 12:51:36.655000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:31.712 12:51:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.713 12:51:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:31.713 12:51:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:31.713 12:51:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.713 12:51:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:31.713 12:51:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.713 12:51:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.971 12:51:37 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:31.971 12:51:37 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:31.971 12:51:37 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:31.971 12:51:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.971 12:51:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.971 12:51:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:31.971 12:51:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.230 12:51:37 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:32.230 12:51:37 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:32.230 12:51:37 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:32.230 12:51:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:32.489 12:51:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:32.489 12:51:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:32.489 12:51:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.R4rt820o64 /tmp/tmp.YDf7cfFFO2 00:37:32.489 12:51:38 keyring_file -- keyring/file.sh@20 -- # killprocess 465216 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 465216 ']' 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 465216 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465216 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 465216' 00:37:32.489 killing process with pid 465216 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@973 -- # kill 465216 00:37:32.489 Received shutdown signal, test time was about 1.000000 seconds 00:37:32.489 00:37:32.489 Latency(us) 00:37:32.489 [2024-11-20T11:51:38.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.489 [2024-11-20T11:51:38.255Z] =================================================================================================================== 00:37:32.489 [2024-11-20T11:51:38.255Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@978 -- # wait 465216 00:37:32.489 12:51:38 keyring_file -- keyring/file.sh@21 -- # killprocess 463693 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 463693 ']' 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 463693 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.489 12:51:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463693 00:37:32.749 12:51:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:32.749 12:51:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:32.749 12:51:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463693' 00:37:32.749 killing process with pid 463693 00:37:32.749 12:51:38 keyring_file -- common/autotest_common.sh@973 -- # kill 463693 00:37:32.749 12:51:38 keyring_file -- common/autotest_common.sh@978 -- # wait 463693 00:37:33.008 00:37:33.008 real 0m11.669s 00:37:33.008 user 0m29.004s 00:37:33.008 sys 0m2.664s 00:37:33.008 12:51:38 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.008 12:51:38 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 ************************************ 00:37:33.008 END TEST keyring_file 00:37:33.008 ************************************ 00:37:33.008 12:51:38 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:33.008 12:51:38 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:33.008 12:51:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:33.008 12:51:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.008 12:51:38 -- common/autotest_common.sh@10 -- # set +x 00:37:33.008 ************************************ 00:37:33.008 START TEST keyring_linux 00:37:33.008 ************************************ 00:37:33.008 12:51:38 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:33.008 Joined session keyring: 464434023 00:37:33.008 * Looking for test storage... 
00:37:33.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:33.008 12:51:38 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:33.008 12:51:38 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:37:33.008 12:51:38 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.269 --rc genhtml_branch_coverage=1 00:37:33.269 --rc genhtml_function_coverage=1 00:37:33.269 --rc genhtml_legend=1 00:37:33.269 --rc geninfo_all_blocks=1 00:37:33.269 --rc geninfo_unexecuted_blocks=1 00:37:33.269 00:37:33.269 ' 00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.269 --rc genhtml_branch_coverage=1 00:37:33.269 --rc genhtml_function_coverage=1 00:37:33.269 --rc genhtml_legend=1 00:37:33.269 --rc geninfo_all_blocks=1 00:37:33.269 --rc geninfo_unexecuted_blocks=1 00:37:33.269 00:37:33.269 ' 
00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.269 --rc genhtml_branch_coverage=1 00:37:33.269 --rc genhtml_function_coverage=1 00:37:33.269 --rc genhtml_legend=1 00:37:33.269 --rc geninfo_all_blocks=1 00:37:33.269 --rc geninfo_unexecuted_blocks=1 00:37:33.269 00:37:33.269 ' 00:37:33.269 12:51:38 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.269 --rc genhtml_branch_coverage=1 00:37:33.269 --rc genhtml_function_coverage=1 00:37:33.269 --rc genhtml_legend=1 00:37:33.269 --rc geninfo_all_blocks=1 00:37:33.269 --rc geninfo_unexecuted_blocks=1 00:37:33.269 00:37:33.269 ' 00:37:33.269 12:51:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:33.269 12:51:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.269 12:51:38 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.269 12:51:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.269 12:51:38 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.269 12:51:38 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.269 12:51:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:33.269 12:51:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:33.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:33.269 12:51:38 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:33.269 12:51:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:33.269 12:51:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:33.269 12:51:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:33.269 12:51:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:33.270 /tmp/:spdk-test:key0 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:33.270 12:51:38 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:33.270 12:51:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:33.270 /tmp/:spdk-test:key1 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=465774 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:33.270 12:51:38 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 465774 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 465774 ']' 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.270 12:51:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:33.270 [2024-11-20 12:51:38.984657] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:37:33.270 [2024-11-20 12:51:38.984706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465774 ] 00:37:33.530 [2024-11-20 12:51:39.040434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.530 [2024-11-20 12:51:39.082398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.530 12:51:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.530 12:51:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:33.530 12:51:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:33.530 12:51:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.530 12:51:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:33.789 [2024-11-20 12:51:39.295894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.789 null0 00:37:33.789 [2024-11-20 12:51:39.327954] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:33.789 [2024-11-20 12:51:39.328334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.789 12:51:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:33.789 302748125 00:37:33.789 12:51:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:33.789 567248367 00:37:33.789 12:51:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=465780 00:37:33.789 12:51:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 465780 /var/tmp/bperf.sock 00:37:33.789 12:51:39 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 465780 ']' 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:33.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.789 12:51:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:33.789 [2024-11-20 12:51:39.400369] Starting SPDK v25.01-pre git sha1 92fb22519 / DPDK 24.03.0 initialization... 
00:37:33.789 [2024-11-20 12:51:39.400409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465780 ] 00:37:33.789 [2024-11-20 12:51:39.473963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.789 [2024-11-20 12:51:39.513764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.090 12:51:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.090 12:51:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:34.090 12:51:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:34.090 12:51:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:34.090 12:51:39 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:34.090 12:51:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:34.382 12:51:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:34.382 12:51:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:34.649 [2024-11-20 12:51:40.194100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:34.649 nvme0n1 00:37:34.649 12:51:40 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:37:34.649 12:51:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:34.649 12:51:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:34.649 12:51:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:34.649 12:51:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:34.649 12:51:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:34.908 12:51:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.908 12:51:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:34.908 12:51:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@25 -- # sn=302748125 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:34.908 12:51:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:35.167 12:51:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 302748125 == \3\0\2\7\4\8\1\2\5 ]] 00:37:35.167 12:51:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 302748125 00:37:35.167 12:51:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:35.167 12:51:40 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:35.167 Running I/O for 1 seconds... 00:37:36.104 21635.00 IOPS, 84.51 MiB/s 00:37:36.104 Latency(us) 00:37:36.104 [2024-11-20T11:51:41.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.104 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:36.104 nvme0n1 : 1.01 21631.93 84.50 0.00 0.00 5897.19 5086.84 13856.18 00:37:36.104 [2024-11-20T11:51:41.870Z] =================================================================================================================== 00:37:36.104 [2024-11-20T11:51:41.870Z] Total : 21631.93 84.50 0.00 0.00 5897.19 5086.84 13856.18 00:37:36.104 { 00:37:36.104 "results": [ 00:37:36.104 { 00:37:36.104 "job": "nvme0n1", 00:37:36.104 "core_mask": "0x2", 00:37:36.104 "workload": "randread", 00:37:36.104 "status": "finished", 00:37:36.104 "queue_depth": 128, 00:37:36.104 "io_size": 4096, 00:37:36.104 "runtime": 1.006059, 00:37:36.104 "iops": 21631.93212326514, 00:37:36.104 "mibps": 84.49973485650445, 00:37:36.104 "io_failed": 0, 00:37:36.104 "io_timeout": 0, 00:37:36.104 "avg_latency_us": 5897.1896126015545, 00:37:36.104 "min_latency_us": 5086.8419047619045, 00:37:36.104 "max_latency_us": 13856.182857142858 00:37:36.104 } 00:37:36.104 ], 00:37:36.104 "core_count": 1 00:37:36.104 } 00:37:36.104 12:51:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:36.104 12:51:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:36.363 12:51:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:36.363 12:51:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:36.363 12:51:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:36.363 12:51:41 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:36.363 12:51:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:36.363 12:51:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.622 12:51:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:36.622 12:51:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:36.622 12:51:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:36.622 12:51:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.622 12:51:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:36.622 12:51:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:36.622 [2024-11-20 12:51:42.358586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:36.622 [2024-11-20 12:51:42.359346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2193a70 (107): Transport endpoint is not connected 00:37:36.622 [2024-11-20 12:51:42.360342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2193a70 (9): Bad file descriptor 00:37:36.622 [2024-11-20 12:51:42.361344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:36.622 [2024-11-20 12:51:42.361352] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:36.622 [2024-11-20 12:51:42.361359] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:36.622 [2024-11-20 12:51:42.361369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:36.622 request: 00:37:36.622 { 00:37:36.622 "name": "nvme0", 00:37:36.622 "trtype": "tcp", 00:37:36.622 "traddr": "127.0.0.1", 00:37:36.622 "adrfam": "ipv4", 00:37:36.622 "trsvcid": "4420", 00:37:36.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.622 "prchk_reftag": false, 00:37:36.622 "prchk_guard": false, 00:37:36.622 "hdgst": false, 00:37:36.622 "ddgst": false, 00:37:36.622 "psk": ":spdk-test:key1", 00:37:36.622 "allow_unrecognized_csi": false, 00:37:36.622 "method": "bdev_nvme_attach_controller", 00:37:36.622 "req_id": 1 00:37:36.622 } 00:37:36.622 Got JSON-RPC error response 00:37:36.622 response: 00:37:36.622 { 00:37:36.622 "code": -5, 00:37:36.622 "message": "Input/output error" 00:37:36.622 } 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@33 -- # sn=302748125 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 302748125 00:37:36.881 1 links removed 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:36.881 
12:51:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@33 -- # sn=567248367 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 567248367 00:37:36.881 1 links removed 00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 465780 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 465780 ']' 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 465780 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465780 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465780' 00:37:36.881 killing process with pid 465780 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 465780 00:37:36.881 Received shutdown signal, test time was about 1.000000 seconds 00:37:36.881 00:37:36.881 Latency(us) 00:37:36.881 [2024-11-20T11:51:42.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:36.881 [2024-11-20T11:51:42.647Z] =================================================================================================================== 00:37:36.881 [2024-11-20T11:51:42.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 465780 
00:37:36.881 12:51:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 465774 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 465774 ']' 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 465774 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.881 12:51:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465774 00:37:37.140 12:51:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:37.140 12:51:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:37.140 12:51:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465774' 00:37:37.140 killing process with pid 465774 00:37:37.140 12:51:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 465774 00:37:37.140 12:51:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 465774 00:37:37.400 00:37:37.400 real 0m4.317s 00:37:37.400 user 0m8.172s 00:37:37.400 sys 0m1.431s 00:37:37.400 12:51:42 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.400 12:51:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:37.400 ************************************ 00:37:37.400 END TEST keyring_linux 00:37:37.400 ************************************ 00:37:37.400 12:51:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:37.400 12:51:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:37.400 12:51:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:37.400 12:51:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:37.400 12:51:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:37.400 12:51:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:37.400 12:51:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:37.400 12:51:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.400 12:51:42 -- common/autotest_common.sh@10 -- # set +x 00:37:37.400 12:51:42 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:37.400 12:51:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:37.400 12:51:42 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:37.400 12:51:42 -- common/autotest_common.sh@10 -- # set +x 00:37:42.673 INFO: APP EXITING 00:37:42.673 INFO: killing all VMs 00:37:42.673 INFO: killing vhost app 00:37:42.673 INFO: EXIT DONE 00:37:45.221 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:45.221 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:45.221 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:45.221 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:48.510 Cleaning 00:37:48.510 Removing: /var/run/dpdk/spdk0/config 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:48.510 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:48.510 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:48.510 Removing: /var/run/dpdk/spdk1/config 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:48.510 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:48.510 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:48.510 Removing: /var/run/dpdk/spdk2/config 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:48.510 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:48.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:48.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:48.511 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:48.511 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:48.511 Removing: /var/run/dpdk/spdk3/config 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:48.511 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:48.511 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:48.511 Removing: /var/run/dpdk/spdk4/config 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:48.511 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:48.511 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:37:48.511 Removing: /dev/shm/bdev_svc_trace.1 00:37:48.511 Removing: /dev/shm/nvmf_trace.0 00:37:48.511 Removing: /dev/shm/spdk_tgt_trace.pid4179429 00:37:48.511 Removing: /var/run/dpdk/spdk0 00:37:48.511 Removing: /var/run/dpdk/spdk1 00:37:48.511 Removing: /var/run/dpdk/spdk2 00:37:48.511 Removing: /var/run/dpdk/spdk3 00:37:48.511 Removing: /var/run/dpdk/spdk4 00:37:48.511 Removing: /var/run/dpdk/spdk_pid10013 00:37:48.511 Removing: /var/run/dpdk/spdk_pid1258 00:37:48.511 Removing: /var/run/dpdk/spdk_pid127309 00:37:48.511 Removing: /var/run/dpdk/spdk_pid132638 00:37:48.511 Removing: /var/run/dpdk/spdk_pid138402 00:37:48.511 Removing: /var/run/dpdk/spdk_pid144889 00:37:48.511 Removing: /var/run/dpdk/spdk_pid144894 00:37:48.511 Removing: /var/run/dpdk/spdk_pid145803 00:37:48.511 Removing: /var/run/dpdk/spdk_pid146548 00:37:48.511 Removing: /var/run/dpdk/spdk_pid147681 00:37:48.511 Removing: /var/run/dpdk/spdk_pid148614 00:37:48.511 Removing: /var/run/dpdk/spdk_pid148620 00:37:48.511 Removing: /var/run/dpdk/spdk_pid148856 00:37:48.511 Removing: /var/run/dpdk/spdk_pid148885 00:37:48.511 Removing: /var/run/dpdk/spdk_pid149032 00:37:48.511 Removing: /var/run/dpdk/spdk_pid149798 00:37:48.511 Removing: /var/run/dpdk/spdk_pid150700 00:37:48.511 Removing: /var/run/dpdk/spdk_pid151614 00:37:48.511 Removing: /var/run/dpdk/spdk_pid152081 00:37:48.511 Removing: /var/run/dpdk/spdk_pid152239 00:37:48.511 Removing: /var/run/dpdk/spdk_pid152533 00:37:48.511 Removing: /var/run/dpdk/spdk_pid153551 00:37:48.511 Removing: /var/run/dpdk/spdk_pid154538 00:37:48.511 Removing: /var/run/dpdk/spdk_pid1609 00:37:48.511 Removing: /var/run/dpdk/spdk_pid162768 00:37:48.511 Removing: /var/run/dpdk/spdk_pid191771 00:37:48.511 Removing: /var/run/dpdk/spdk_pid196298 00:37:48.511 Removing: /var/run/dpdk/spdk_pid197900 00:37:48.511 Removing: /var/run/dpdk/spdk_pid199737 00:37:48.511 Removing: /var/run/dpdk/spdk_pid199758 00:37:48.511 Removing: /var/run/dpdk/spdk_pid199990 00:37:48.511 Removing: 
/var/run/dpdk/spdk_pid200152 00:37:48.511 Removing: /var/run/dpdk/spdk_pid200626 00:37:48.511 Removing: /var/run/dpdk/spdk_pid20111 00:37:48.511 Removing: /var/run/dpdk/spdk_pid202348 00:37:48.511 Removing: /var/run/dpdk/spdk_pid203291 00:37:48.511 Removing: /var/run/dpdk/spdk_pid203657 00:37:48.511 Removing: /var/run/dpdk/spdk_pid205930 00:37:48.511 Removing: /var/run/dpdk/spdk_pid206428 00:37:48.511 Removing: /var/run/dpdk/spdk_pid207046 00:37:48.511 Removing: /var/run/dpdk/spdk_pid20806 00:37:48.511 Removing: /var/run/dpdk/spdk_pid211198 00:37:48.511 Removing: /var/run/dpdk/spdk_pid216811 00:37:48.511 Removing: /var/run/dpdk/spdk_pid216812 00:37:48.511 Removing: /var/run/dpdk/spdk_pid216813 00:37:48.511 Removing: /var/run/dpdk/spdk_pid220586 00:37:48.511 Removing: /var/run/dpdk/spdk_pid229438 00:37:48.511 Removing: /var/run/dpdk/spdk_pid233476 00:37:48.511 Removing: /var/run/dpdk/spdk_pid239486 00:37:48.511 Removing: /var/run/dpdk/spdk_pid240790 00:37:48.511 Removing: /var/run/dpdk/spdk_pid242320 00:37:48.511 Removing: /var/run/dpdk/spdk_pid243667 00:37:48.511 Removing: /var/run/dpdk/spdk_pid248374 00:37:48.511 Removing: /var/run/dpdk/spdk_pid25078 00:37:48.511 Removing: /var/run/dpdk/spdk_pid252710 00:37:48.511 Removing: /var/run/dpdk/spdk_pid25541 00:37:48.511 Removing: /var/run/dpdk/spdk_pid256945 00:37:48.511 Removing: /var/run/dpdk/spdk_pid264356 00:37:48.511 Removing: /var/run/dpdk/spdk_pid264487 00:37:48.511 Removing: /var/run/dpdk/spdk_pid269077 00:37:48.511 Removing: /var/run/dpdk/spdk_pid269303 00:37:48.511 Removing: /var/run/dpdk/spdk_pid269533 00:37:48.511 Removing: /var/run/dpdk/spdk_pid269991 00:37:48.511 Removing: /var/run/dpdk/spdk_pid269996 00:37:48.511 Removing: /var/run/dpdk/spdk_pid274726 00:37:48.511 Removing: /var/run/dpdk/spdk_pid275566 00:37:48.511 Removing: /var/run/dpdk/spdk_pid279911 00:37:48.511 Removing: /var/run/dpdk/spdk_pid282666 00:37:48.511 Removing: /var/run/dpdk/spdk_pid287880 00:37:48.511 Removing: 
/var/run/dpdk/spdk_pid293390 00:37:48.511 Removing: /var/run/dpdk/spdk_pid29807 00:37:48.511 Removing: /var/run/dpdk/spdk_pid302182 00:37:48.511 Removing: /var/run/dpdk/spdk_pid309177 00:37:48.511 Removing: /var/run/dpdk/spdk_pid309179 00:37:48.511 Removing: /var/run/dpdk/spdk_pid328476 00:37:48.511 Removing: /var/run/dpdk/spdk_pid328960 00:37:48.511 Removing: /var/run/dpdk/spdk_pid329634 00:37:48.511 Removing: /var/run/dpdk/spdk_pid330116 00:37:48.511 Removing: /var/run/dpdk/spdk_pid330847 00:37:48.511 Removing: /var/run/dpdk/spdk_pid331324 00:37:48.770 Removing: /var/run/dpdk/spdk_pid331850 00:37:48.770 Removing: /var/run/dpdk/spdk_pid332485 00:37:48.770 Removing: /var/run/dpdk/spdk_pid336527 00:37:48.770 Removing: /var/run/dpdk/spdk_pid336770 00:37:48.770 Removing: /var/run/dpdk/spdk_pid342826 00:37:48.770 Removing: /var/run/dpdk/spdk_pid343102 00:37:48.770 Removing: /var/run/dpdk/spdk_pid348597 00:37:48.770 Removing: /var/run/dpdk/spdk_pid352836 00:37:48.770 Removing: /var/run/dpdk/spdk_pid35678 00:37:48.770 Removing: /var/run/dpdk/spdk_pid362570 00:37:48.770 Removing: /var/run/dpdk/spdk_pid363258 00:37:48.770 Removing: /var/run/dpdk/spdk_pid367934 00:37:48.770 Removing: /var/run/dpdk/spdk_pid368278 00:37:48.770 Removing: /var/run/dpdk/spdk_pid372374 00:37:48.770 Removing: /var/run/dpdk/spdk_pid378153 00:37:48.770 Removing: /var/run/dpdk/spdk_pid380746 00:37:48.770 Removing: /var/run/dpdk/spdk_pid38301 00:37:48.770 Removing: /var/run/dpdk/spdk_pid390681 00:37:48.770 Removing: /var/run/dpdk/spdk_pid399355 00:37:48.770 Removing: /var/run/dpdk/spdk_pid400956 00:37:48.770 Removing: /var/run/dpdk/spdk_pid401881 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4177063 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4178131 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4179429 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4180077 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4181022 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4181146 00:37:48.770 Removing: 
/var/run/dpdk/spdk_pid4182157 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4182243 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4182597 00:37:48.770 Removing: /var/run/dpdk/spdk_pid418320 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4184162 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4185626 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4185910 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4186197 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4186517 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4186807 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4187058 00:37:48.770 Removing: /var/run/dpdk/spdk_pid4187310 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4187595 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4188562 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4191569 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4191773 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4191876 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4192093 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4192554 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4192713 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4193024 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4193212 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4193790 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4193857 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4194125 00:37:48.771 Removing: /var/run/dpdk/spdk_pid4194188 00:37:48.771 Removing: /var/run/dpdk/spdk_pid422182 00:37:48.771 Removing: /var/run/dpdk/spdk_pid424963 00:37:48.771 Removing: /var/run/dpdk/spdk_pid432999 00:37:48.771 Removing: /var/run/dpdk/spdk_pid433004 00:37:48.771 Removing: /var/run/dpdk/spdk_pid438105 00:37:48.771 Removing: /var/run/dpdk/spdk_pid440007 00:37:48.771 Removing: /var/run/dpdk/spdk_pid441970 00:37:48.771 Removing: /var/run/dpdk/spdk_pid443229 00:37:48.771 Removing: /var/run/dpdk/spdk_pid445207 00:37:48.771 Removing: /var/run/dpdk/spdk_pid446273 00:37:48.771 Removing: /var/run/dpdk/spdk_pid455008 00:37:49.029 Removing: /var/run/dpdk/spdk_pid455478 00:37:49.029 
Removing: /var/run/dpdk/spdk_pid455937 00:37:49.029 Removing: /var/run/dpdk/spdk_pid458927 00:37:49.029 Removing: /var/run/dpdk/spdk_pid459398 00:37:49.029 Removing: /var/run/dpdk/spdk_pid459863 00:37:49.029 Removing: /var/run/dpdk/spdk_pid463693 00:37:49.029 Removing: /var/run/dpdk/spdk_pid463704 00:37:49.029 Removing: /var/run/dpdk/spdk_pid465216 00:37:49.029 Removing: /var/run/dpdk/spdk_pid465774 00:37:49.029 Removing: /var/run/dpdk/spdk_pid465780 00:37:49.029 Removing: /var/run/dpdk/spdk_pid48927 00:37:49.029 Removing: /var/run/dpdk/spdk_pid5743 00:37:49.029 Removing: /var/run/dpdk/spdk_pid57974 00:37:49.029 Removing: /var/run/dpdk/spdk_pid59813 00:37:49.029 Removing: /var/run/dpdk/spdk_pid60742 00:37:49.029 Removing: /var/run/dpdk/spdk_pid77621 00:37:49.029 Removing: /var/run/dpdk/spdk_pid81694 00:37:49.029 Removing: /var/run/dpdk/spdk_pid984 00:37:49.029 Clean 00:37:49.029 12:51:54 -- common/autotest_common.sh@1453 -- # return 0 00:37:49.029 12:51:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:49.029 12:51:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.029 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:37:49.029 12:51:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:49.029 12:51:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.029 12:51:54 -- common/autotest_common.sh@10 -- # set +x 00:37:49.029 12:51:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:49.029 12:51:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:49.029 12:51:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:49.029 12:51:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:49.029 12:51:54 -- spdk/autotest.sh@398 -- # hostname 00:37:49.029 12:51:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:49.411 geninfo: WARNING: invalid characters removed from testname! 00:38:11.346 12:52:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:12.283 12:52:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:14.187 12:52:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:16.092 12:52:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:17.997 12:52:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:19.904 12:52:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:21.809 12:52:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:21.809 12:52:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:21.809 12:52:27 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:21.809 12:52:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:21.809 12:52:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:21.809 12:52:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:21.809 + [[ -n 4099779 ]] 00:38:21.809 + sudo kill 4099779 00:38:21.819 [Pipeline] } 00:38:21.833 [Pipeline] // stage 00:38:21.838 
[Pipeline] } 00:38:21.852 [Pipeline] // timeout 00:38:21.856 [Pipeline] } 00:38:21.870 [Pipeline] // catchError 00:38:21.875 [Pipeline] } 00:38:21.890 [Pipeline] // wrap 00:38:21.895 [Pipeline] } 00:38:21.908 [Pipeline] // catchError 00:38:21.918 [Pipeline] stage 00:38:21.920 [Pipeline] { (Epilogue) 00:38:21.932 [Pipeline] catchError 00:38:21.934 [Pipeline] { 00:38:21.947 [Pipeline] echo 00:38:21.948 Cleanup processes 00:38:21.955 [Pipeline] sh 00:38:22.239 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.239 476458 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.254 [Pipeline] sh 00:38:22.538 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:22.538 ++ grep -v 'sudo pgrep' 00:38:22.538 ++ awk '{print $1}' 00:38:22.538 + sudo kill -9 00:38:22.538 + true 00:38:22.550 [Pipeline] sh 00:38:22.834 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:35.061 [Pipeline] sh 00:38:35.347 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:35.347 Artifacts sizes are good 00:38:35.364 [Pipeline] archiveArtifacts 00:38:35.373 Archiving artifacts 00:38:35.508 [Pipeline] sh 00:38:35.793 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:35.809 [Pipeline] cleanWs 00:38:35.821 [WS-CLEANUP] Deleting project workspace... 00:38:35.821 [WS-CLEANUP] Deferred wipeout is used... 00:38:35.828 [WS-CLEANUP] done 00:38:35.830 [Pipeline] } 00:38:35.847 [Pipeline] // catchError 00:38:35.858 [Pipeline] sh 00:38:36.139 + logger -p user.info -t JENKINS-CI 00:38:36.148 [Pipeline] } 00:38:36.162 [Pipeline] // stage 00:38:36.167 [Pipeline] } 00:38:36.179 [Pipeline] // node 00:38:36.184 [Pipeline] End of Pipeline 00:38:36.231 Finished: SUCCESS